

A study of WebSocket: techniques for voice and images

2015/12/26 · JavaScript · 3 comments · websocket

Original source: AlloyTeam

Most readers are probably familiar with WebSocket by now; if not, that is fine too, because one sentence sums it up:

"The WebSocket protocol is a new HTML5 protocol. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is a huge step forward: we can wave goodbye to Comet and long polling, and be glad that we live in the HTML5 era.

This article explores WebSocket in three parts:

first, common ways of using WebSocket; second, building a server-side WebSocket entirely by hand; and finally the main event, two demos built on WebSocket, one transferring images and one an online voice chat room. Let's go.

Part One: Common ways to use WebSocket

Here are three WebSocket implementations I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

Let's start with a demo:

JavaScript

var http = require('http');
var io = require('socket.io');
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);
 
var socket = io.listen(server);
 
socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', {options});
 
    socket.on('xxx', function(data) {
        // do something
    });
});
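
The demo above only covers the server side. As a minimal sketch (my addition, not from the original article), the matching browser-side client might look like the following; it assumes the socket.io client script has already been loaded on the page (by default socket.io serves it at /socket.io/socket.io.js):

JavaScript

var socket = io.connect('http://localhost:8888');

socket.on('connect', function() {
    // Mirrors socket.emit('xxx', {options}) on the server:
    // send an event up, then listen for events pushed down.
    socket.emit('xxx', { hello: 'world' });
});

socket.on('xxx', function(data) {
    // Handle whatever the server pushes for the 'xxx' event
    console.log(data);
});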

Anyone who has looked into WebSocket will know socket.io: it is famous, and deservedly so. It handles timeouts, the handshake and so on by itself, and I would guess it is the most widely used way of getting WebSocket into production. Its finest feature is graceful degradation: when a browser does not support WebSocket, it quietly falls back internally to long polling and other transports, and neither users nor developers need to care about the details. Very convenient.

Everything has two sides, though. Precisely because socket.io covers every case, it is heavy: its packaging adds quite a bit of communication overhead, and the graceful-degradation advantage is gradually losing its shine as browser support for the standard spreads:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

This is not to say that socket.io is bad or obsolete, only that sometimes it is worth considering other implementations.

 

2. The http module

We just called socket.io heavy, so now for something lightweight. The demo first:

JavaScript

var http = require('http');
var server = http.createServer();
server.on('upgrade', function(req) {
    console.log(req.headers);
});
server.listen(8888);

A very simple implementation. Internally, socket.io implements WebSocket the same way, except that it then attaches a set of handlers for us; we can add those ourselves too. Here are two screenshots from the socket.io source:

[Image 1: socket.io source code]

[Image 2: socket.io source code]
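
To give a concrete sense of what those extra handlers have to do, here is a rough sketch of completing the handshake inside the 'upgrade' event (my illustration of the Node API, not the author's code or socket.io's): the callback also receives the raw TCP socket, and the server writes the 101 response on it itself, using the same fixed GUID that appears in Part Two.

JavaScript

var http = require('http');
var crypto = require('crypto');
var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

var server = http.createServer();

server.on('upgrade', function(req, socket, head) {
    // Compute Sec-WebSocket-Accept from the client's key
    var key = crypto.createHash('sha1')
        .update(req.headers['sec-websocket-key'] + GUID)
        .digest('base64');

    socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
                 'Upgrade: websocket\r\n' +
                 'Connection: Upgrade\r\n' +
                 'Sec-WebSocket-Accept: ' + key + '\r\n\r\n');

    // From here on, raw WebSocket frames arrive on `socket`
    // and still need to be decoded (see Part Two below).
});

server.listen(8888);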

 

3. The ws module

One of the examples later in the article uses it, so I will just mention it here and come back to the details then.
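
For a quick taste before then, a minimal echo server with ws looks roughly like this (a sketch, assuming the package has been installed with npm install ws):

JavaScript

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8888 });

wss.on('connection', function(socket) {
    socket.on('message', function(message) {
        // Echo whatever the client sends straight back;
        // ws handles the handshake and frame parsing for us.
        socket.send(message);
    });
});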

 

Part Two: Implementing a server-side WebSocket from scratch

We have just covered three common WebSocket implementations. Now think about it from a developer's point of view:

compared with the traditional HTTP request/response model, WebSocket adds server-pushed events that the client receives and handles; the development experience does not actually feel very different.

That is because those modules have already filled in the awkward parts for us, data-frame parsing above all. In this second part we will try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research help; I will only sketch the essentials here, and if you are curious please look up his blog 【web技术研究所】.

Rolling your own server-side WebSocket comes down to two things: use the net module to receive the data stream, and parse the data against the official frame-structure diagram. With those two parts done, all the low-level work is finished.

First, a packet capture of the WebSocket handshake request sent by a client.

The client code is as simple as it gets:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[Image 3: capture of the client handshake request]

The server has to validate that key: concatenate it with a fixed GUID string, run SHA-1 over the result once, Base64-encode it, and send it back:

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // Grab the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // Append the WS GUID, run SHA-1 once, then Base64-encode
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // Write the response back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // This field carries the key the server just computed
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // An empty line terminates the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);

With that, the handshake is done; what remains is the work of parsing and generating data frames.

First, the frame-structure diagram from the official spec:

[Image 4: WebSocket frame structure]

A quick walk through the fields:

FIN marks whether this frame is the final fragment of a message.

RSV is reserved space, normally 0.

The opcode marks the data type: continuation fragment, text or binary payload, close, ping/pong heartbeat, and so on.

Here is a table of the opcode values:

[Image 5: opcode values]

MASK marks whether the payload is masked.

Payload len, together with the extended payload length that may follow, describes the payload size; this is the most fiddly part.

Payload len is only 7 bits, so as an unsigned integer it can only express 0 to 127, which is far too small for large payloads. The rule is therefore: if the length is 125 or less, Payload len is the actual length; if it is 126, the next two bytes hold the length; and if it is 127, the next eight bytes hold the length.

Masking-key is the 4-byte mask itself.

Here is the code that decodes a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        // First two bytes: FIN/opcode and MASK/payload length
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };

    // 126: the next two bytes hold the real length
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the next eight bytes hold the real length
    // (the high four bytes are skipped here, assuming the length fits in 32 bits)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        // Masked payload: XOR each byte with the 4-byte masking key
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i+j] ^ frame.MaskingKey[j%4]);
        }
    } else {
        s = e.slice(i, i+frame.PayloadLength);
    }

    s = new Buffer(s);

    // Opcode 1 means a text frame, so convert the payload to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

Next, the code that generates a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    // First byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);

    // Payload length: 7 bits, or 126 + 2 bytes, or 127 + 8 bytes
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l&0xFF00) >> 8, l&0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
    }

    // Server-to-client frames are not masked, so just append the payload
    return Buffer.concat([new Buffer(s), o]);
}

Both functions simply follow the frame-structure diagram, so I will not go into more detail here; the heart of this article is the next part. If this area interests you, head over to web技术研究所.

 

Part Three: Transferring images and a WebSocket voice chat room

The main event has arrived: the real point of this article is to show a couple of scenarios where WebSocket shines.

1. Transferring images

Let's think through the steps for transferring an image. The server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course.

The client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888"); ws.onopen = function(){ console.log("握手成功"); }; ws.onmessage = function(e) { var reader = new FileReader(); reader.onload = function(event) { var contents = event.target.result; var a = new Image(); a.src = contents; document.body.appendChild(a); } reader.readAsDataU奥德赛L(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

When the message arrives, readAsDataURL turns it into a base64 data URL that is added straight to the page as an image.

Over to the server side:

JavaScript

fs.readdir("skyland", function(err, files) { if(err) { throw err; } for(var i = 0; i < files.length; i++) { fs.readFile('skyland/' + files[i], function(err, data) { if(err) { throw err; } o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) { var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2); if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile('skyland/' + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded here to 2, a binary frame, so the client will not try to call toString on the received data (which would throw).
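
One extra note from me (not part of the original demo): in the browser, binary frames arrive in onmessage as a Blob by default, which is exactly why the FileReader approach above works; if you would rather work with typed arrays, you can ask the socket for ArrayBuffers instead:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");

// 'blob' is the default; with 'arraybuffer', e.data in onmessage
// is an ArrayBuffer for binary frames instead of a Blob.
ws.binaryType = 'arraybuffer';

ws.onmessage = function(e) {
    console.log(e.data instanceof ArrayBuffer); // true for binary frames
};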

The code is simple. Now let me share how fast WebSocket image transfer actually turned out to be.

I tested with a fairly large batch of images, 8.24 MB in total.

An ordinary static file server took around 20 s (the server is far away).

A CDN took around 2.8 s.

And our WebSocket approach??!

The answer: also around 20 s. Disappointing, I know. The time is spent in transmission, not in reading the images on the server; on the local machine the same set of images finishes in about 1 s. A raw data stream cannot break the speed limits imposed by distance either.

Now let's look at the other use of WebSocket.

 

Building a voice chat room with WebSocket

First, let's lay out what the voice chat room does:

a user joins a channel and speaks into the microphone; the audio is sent to the backend, which forwards it to everyone else in the channel, and they play it when it arrives.

The difficulty lies in two places: capturing the audio input, and playing back the received data stream.

Audio input first. It uses HTML5's getUserMedia method, but be warned: deploying this method has a pitfall that I will come back to at the end. First the code:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument, { audio: true }, enables audio only. We then create an SRecorder object, and essentially all later operations happen on that object. If you run this locally, the browser should ask whether to allow microphone input; confirm, and we are up and running.

Next, the SRecorder constructor. Here are the key parts:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering will recognise the idea: "before a piece of audio reaches the speakers we intercept it on the way, and that is how we get hold of the audio data; the interception is done by window.AudioContext, and all of our audio operations are based on this object". Through an AudioContext we can create various AudioNodes, add filters, and play back the modified sound.

Recording works on the same principle: it also goes through AudioContext, with one extra step, receiving the microphone's audio input. Instead of requesting an audio file's ArrayBuffer over ajax and then decoding it, as we usually would, microphone input is taken in with the createMediaStreamSource method; note that its argument is precisely the stream handed to the callback in getUserMedia's second argument.
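
For contrast, the "usual" path the previous paragraph refers to looks roughly like this (a sketch of standard Web Audio usage, not code from the demo; 'some.mp3' is a made-up URL):

JavaScript

var context = new AudioContext();

// Conventional playback: fetch an audio file as an ArrayBuffer,
// decode it, then play it through a buffer source node.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'some.mp3', true);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
    context.decodeAudioData(xhr.response, function(buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0);
    });
};
xhr.send();

With a microphone there is no file to decode; createMediaStreamSource wraps the live stream directly into an AudioNode.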

As for createScriptProcessor, its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short, this method lets us process the captured audio with JavaScript.

We have reached audio capture at last! Victory is in sight!

Next, let's connect the microphone input to the audio-capture node:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

context.destination重返代表在乎况中的音频的最终目标地。

好,到了那儿,大家还索要二个监听音频收集的平地风波

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method to it because it is needed later. Its encodeWAV method is the real gem: the author did a lot of audio compression and optimisation there. The full object is included in the complete code at the end.

At this point the whole "user joins a channel and speaks into the microphone" part is done. Next comes sending the audio stream to the server, and here it gets slightly painful. As mentioned earlier, WebSocket uses the opcode to distinguish text frames from binary frames, but what we push into audioData.input inside onaudioprocess is a plain array, while playback on the other side needs a Blob of { type: 'audio/wav' }. So before sending, the array has to be converted into a WAV Blob, which is exactly what the encodeWAV method mentioned above is for.

The server, by contrast, looks very simple: it only needs to forward.

Local tests worked fine, but then the giant pit appeared: once the program runs on a server, calling getUserMedia demands a secure context, i.e. https, which also means ws has to become wss... So the server code below does not use our hand-rolled handshake, parsing and encoding; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
 
    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }
 
        res.end(data);
    });
});
 
var wss = new ws.Server({server: server});
 
wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});
 
server.listen(8888);

The code is still very straightforward: the https module, plus the ws module mentioned at the start. userMap is the simulated channel, implementing only the basic forwarding.

I use the ws module because pairing it with https to get wss really is that convenient, with zero conflict with the logic code.

I will not go into setting up https here; the main ingredients are a private key, a CSR signing request, and the certificate file. Interested readers can look into it (although without it you cannot use getUserMedia on a live site anyway).

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    config = {};
 
    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recorded data
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample size: 8 or 16 bits
        , outputSampleRate: config.sampleRate    // output sample rate
        , outputSampleBits: config.sampleBits       // output sample size: 8 or 16 bits
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge buffers, then downsample
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF chunk descriptor
            writeString('RIFF'); offset += 4;
            // total byte count from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAV file marker
            writeString('WAVE'); offset += 4;
            // format chunk marker
            writeString('fmt '); offset += 4;
            // size of the fmt chunk, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM samples)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second, i.e. the playback speed of each channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk marker
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release it to send.

I did try hands-free real-time talking by sending with setInterval, but the background noise was rather heavy and the result was poor; that would need another layer on top of encodeWAV to strip out ambient noise, so I went with the more practical push-to-talk approach instead.

 

This article first surveyed WebSocket, then, following the spec, tried parsing and generating data frames by hand, which gives a much deeper feel for how WebSocket works.

Finally, the two demos show some of WebSocket's potential. The voice chat demo touches a fairly broad surface; if you have never met the AudioContext object, it is best to get familiar with AudioContext first.

That is all for this article. Ideas and questions are welcome; let's discuss.

 
