WebRTC is an open source project to enable realtime communication of audio, video and data in Web and native apps.
WebRTC has several JavaScript APIs:
getUserMedia(): capture audio and video.
MediaRecorder: record audio and video.
RTCPeerConnection: stream audio and video between users.
RTCDataChannel: stream data between users.

These APIs are supported in Firefox, Opera, and Chrome on desktop and Android. WebRTC is also available for native apps on iOS and Android.
WebRTC uses RTCPeerConnection to communicate streaming data between browsers, but also needs a mechanism to coordinate communication and to send control messages, a process known as signaling. Signaling methods and protocols are not specified by WebRTC. In this codelab we use Node, but there are many alternatives.
WebRTC is designed to work peer-to-peer, so users can connect by the most direct route possible. However, WebRTC is built to cope with real-world networking: client applications need to traverse NAT gateways and firewalls, and peer to peer networking needs fallbacks in case direct connection fails. As part of this process, the WebRTC APIs use STUN servers to get the IP address of your computer, and TURN servers to function as relay servers in case peer-to-peer communication fails. (WebRTC in the real world explains in more detail.)
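The STUN and TURN servers an app wants to use are passed to RTCPeerConnection via its configuration object. A minimal sketch of such a configuration follows; stun.l.google.com is a well-known public STUN server often used in examples, while the TURN entry is a placeholder you would replace with your own relay server and credentials:

```javascript
// Hypothetical ICE server configuration for an RTCPeerConnection.
// The TURN URL, username, and credential below are placeholders.
var configuration = {
  iceServers: [
    {urls: 'stun:stun.l.google.com:19302'},
    {
      urls: 'turn:turn.example.org:3478',
      username: 'user',
      credential: 'secret'
    }
  ]
};

// In a browser you would then create the connection like this:
// var pc = new RTCPeerConnection(configuration);
console.log(configuration.iceServers.length + ' ICE server(s) configured');
```

Without a TURN entry, peers that cannot reach each other directly (for example, both behind symmetric NATs) will fail to connect.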
Encryption is mandatory for all WebRTC components, and its JavaScript APIs can only be used from secure origins (HTTPS or localhost). Signaling mechanisms aren't defined by WebRTC standards, so it's up to you to make sure your signaling uses secure protocols.
Build an app to get video and take snapshots with your webcam and share them peer-to-peer via WebRTC. Along the way you'll learn how to use the core WebRTC APIs and set up a messaging server using Node.
If you're familiar with git, you can download the code for this codelab from GitHub by cloning it:
git clone https://github.com/googlecodelabs/webrtc-web
Alternatively, click the following button to download a .zip file of the code:
Open the downloaded zip file. This will unpack a project folder (webrtc-web) that contains one folder for each step of this codelab, along with all of the resources you will need.
You'll be doing all your coding work in the directory named work.
The step-nn folders contain a finished version for each step of this codelab. They are there for reference.
While you're free to use your own web server, this codelab is designed to work well with the Chrome Web Server. If you don't have that app installed yet, you can install it from the Chrome Web Store.
After installing the Web Server for Chrome app, click on the Chrome Apps shortcut from the bookmarks bar, a New Tab page, or from the App Launcher:
Click on the Web Server icon:
Next, you'll see this dialog, which allows you to configure your local web server:
Click the CHOOSE FOLDER button, and select the work folder you just created. This will enable you to view your work in progress in Chrome via the URL highlighted in the Web Server dialog in the Web Server URL(s) section.
Under Options, check the box next to Automatically show index.html as shown below:
Then stop and restart the server by sliding the toggle labeled Web Server: STARTED to the left and then back to the right.
Now visit your work site in your web browser by clicking on the highlighted Web Server URL. You should see a page that looks like this, which corresponds to work/index.html:
Obviously, this app is not yet doing anything interesting — so far, it's just a minimal skeleton we're using to make sure your web server is working properly. We'll add functionality and layout features in subsequent steps.
In this step you'll find out how to:
A complete version of this step is in the step-01 folder.
Add a video element and a script element to index.html in your work directory:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <video autoplay></video>
  <script src="js/main.js"></script>
</body>
</html>
Add the following to main.js in your js folder:
'use strict';

navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var constraints = {
  audio: false,
  video: true
};

var video = document.querySelector('video');

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);
Open index.html in your browser and you should see something like this (featuring the view from your webcam, of course!):
getUserMedia() is called like this:
navigator.getUserMedia(constraints, successCallback, errorCallback);
This technology is still relatively new, so browsers are still using prefixed names for getUserMedia. Hence the shim code at the top of main.js!
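The idea behind that shim (today handled for you by adapter.js) can be sketched as a function that picks whichever implementation a given navigator object provides. The mock navigator below is purely for illustration:

```javascript
// Pick the first getUserMedia implementation a navigator-like
// object provides. Returns null if none is available.
function resolveGetUserMedia(nav) {
  return nav.getUserMedia ||
      nav.webkitGetUserMedia ||
      nav.mozGetUserMedia ||
      null;
}

// Mock navigator standing in for an old WebKit-based browser:
var mockNavigator = {
  webkitGetUserMedia: function(constraints, onSuccess, onError) {
    onSuccess({id: 'fake-stream'});
  }
};

var getUserMedia = resolveGetUserMedia(mockNavigator);
getUserMedia({video: true}, function(stream) {
  console.log('Got stream: ' + stream.id);
}, function(error) {
  console.log('Error: ' + error);
});
```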
The constraints argument allows you to specify what media to get — in this example, video and not audio:
var constraints = {
  audio: false,
  video: true
};
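Constraints can go beyond simple booleans. For example, the modern (promise-based) API lets you request a preferred resolution; the values below are illustrative and the browser picks the closest mode it supports:

```javascript
// A more detailed constraints object (modern syntax): video at a
// preferred 1280x720, with no audio. In a browser you would pass
// this to navigator.mediaDevices.getUserMedia(hdConstraints).
var hdConstraints = {
  audio: false,
  video: {
    width: {ideal: 1280},
    height: {ideal: 720}
  }
};
console.log(JSON.stringify(hdConstraints));
```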
If getUserMedia() is successful, the video stream from the webcam is set as the source of the video element:
function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}
The stream object passed to getUserMedia() is in global scope, so you can inspect it from the browser console: open the console, type stream and press Return. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
What does stream.getVideoTracks() return?
Try calling stream.getVideoTracks()[0].stop().
What happens if you change the constraints object to {audio: true, video: true}?
Try applying CSS filters to the video element:

video {
  -webkit-filter: blur(4px) invert(1) opacity(0.5);
}

video {
  filter: hue-rotate(180deg) saturate(200%);
  -moz-filter: hue-rotate(180deg) saturate(200%);
  -webkit-filter: hue-rotate(180deg) saturate(200%);
}
In this step you learned how to:
A complete version of this step is in the step-01 folder.
Don't forget the autoplay attribute on the video element. Without that, you'll only see a single frame!
There are many more options for getUserMedia() constraints. Take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints. As you'll see, there are lots of interesting WebRTC samples on that site.
Use width and max-width to set a preferred size and a maximum size for the video. The browser will calculate the height automatically:

video {
  max-width: 100%;
  width: 320px;
}
You've got video, but how do you stream it? Find out in the next step!
In this step you'll find out how to:
A complete version of this step is in the step-02 folder.
RTCPeerConnection is an API for making WebRTC calls to stream video and audio, and exchange data.
This example sets up a connection between two RTCPeerConnection objects (known as peers) on the same page.
Not much practical use, but good for understanding how RTCPeerConnection works.
In index.html replace the single video element with two video elements and three buttons:
<video id="localVideo" autoplay></video>
<video id="remoteVideo" autoplay></video>
<div>
<button id="startButton">Start</button>
<button id="callButton">Call</button>
<button id="hangupButton">Hang Up</button>
</div>
One video element will display the stream from getUserMedia() and the other will show the same video streamed via RTCPeerConnection. (In a real-world application, one video element would display the local stream and the other the remote stream.)
Add a link to adapter.js above the link to main.js:
<script src="js/lib/adapter.js"></script>
index.html should now look like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <video id="localVideo" autoplay></video>
  <video id="remoteVideo" autoplay></video>
  <div>
    <button id="startButton">Start</button>
    <button id="callButton">Call</button>
    <button id="hangupButton">Hang Up</button>
  </div>
  <script src="js/lib/adapter.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
Replace main.js with the version in the step-02 folder.
Open index.html, click the Start button to get video from your webcam, and click Call to make the peer connection. You should see the same video (from your webcam) in both video elements. View the browser console to see WebRTC logging.
This step does a lot...
WebRTC uses the RTCPeerConnection API to set up a connection to stream video between WebRTC clients, known as peers.
In this example, the two RTCPeerConnection objects are on the same page: pc1 and pc2. Not much practical use, but good for demonstrating how the APIs work.
Setting up a call between WebRTC peers involves three tasks:

Create an RTCPeerConnection for each end of the call and, at each end, add the local stream from getUserMedia().
Get and share network information: potential connection endpoints are known as ICE candidates.
Get and share local and remote descriptions: metadata about local media, in SDP format.

Imagine that Alice and Bob want to use RTCPeerConnection to set up a video chat.

First up, Alice and Bob exchange network information. The expression 'finding candidates' refers to the process of finding network interfaces and ports using the ICE framework.

1. Alice creates an RTCPeerConnection object with an onicecandidate handler. This corresponds to the following code from main.js:

pc1 = new RTCPeerConnection(servers);
trace('Created local peer connection object pc1');
pc1.onicecandidate = function(e) {
  onIceCandidate(pc1, e);
};
2. Alice calls getUserMedia() and adds the stream passed to that:

pc1.addStream(localStream);
3. The onicecandidate handler from step 1. is called when network candidates become available.
4. Alice sends serialized candidate data to Bob. (In a real application, this exchange happens via a signaling service; here both peers are on the same page.)
5. When Bob gets a candidate message from Alice, he calls addIceCandidate(), to add the candidate to the remote peer description:

function onIceCandidate(pc, event) {
  if (event.candidate) {
    getOtherPc(pc).addIceCandidate(
      new RTCIceCandidate(event.candidate)
    ).then(
      function() {
        onAddIceCandidateSuccess(pc);
      },
      function(err) {
        onAddIceCandidateError(pc, err);
      }
    );
    trace(getName(pc) + ' ICE candidate: \n' + event.candidate.candidate);
  }
}
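The candidate string logged above follows a fixed format (defined in the ICE specification, RFC 5245): foundation, component, transport, priority, address, port, and candidate type. A small parser sketch makes the fields visible; the sample string is made up but follows that standard layout:

```javascript
// Parse the space-separated fields of an ICE candidate line, as
// logged by the onIceCandidate handler. The sample is made up.
function parseCandidate(text) {
  var fields = text.replace('candidate:', '').split(' ');
  return {
    foundation: fields[0],
    component: parseInt(fields[1], 10),
    transport: fields[2],
    priority: parseInt(fields[3], 10),
    address: fields[4],
    port: parseInt(fields[5], 10),
    type: fields[7] // fields[6] is the literal keyword 'typ'
  };
}

var sample =
    'candidate:842163049 1 udp 1677729535 203.0.113.7 46154 typ srflx';
console.log(parseCandidate(sample));
```

A 'srflx' (server-reflexive) candidate like this one is an address discovered via a STUN server; 'host' candidates are local interfaces, and 'relay' candidates come from a TURN server.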
WebRTC peers also need to find out and exchange local and remote audio and video media information, such as resolution and codec capabilities. Signaling to exchange media configuration information proceeds by exchanging blobs of metadata, known as an offer and an answer, using the Session Description Protocol format, known as SDP:
1. Alice runs the RTCPeerConnection createOffer() method. The promise returned provides an RTCSessionDescription: Alice's local session description:

pc1.createOffer(
  offerOptions
).then(
  onCreateOfferSuccess,
  onCreateSessionDescriptionError
);
2. If successful, Alice sets the local description using setLocalDescription() and then sends this session description to Bob via their signaling channel.
3. Bob sets the description Alice sent him as the remote description, using setRemoteDescription().
4. Bob runs the RTCPeerConnection createAnswer() method, passing it the remote description he got from Alice, so a local session can be generated that is compatible with hers. The createAnswer() promise passes on an RTCSessionDescription: Bob sets that as the local description and sends it to Alice.
5. When Alice gets Bob's session description, she sets that as the remote description with setRemoteDescription().

function onCreateOfferSuccess(desc) {
  pc1.setLocalDescription(desc).then(
    function() {
      onSetLocalSuccess(pc1);
    },
    onSetSessionDescriptionError
  );
  pc2.setRemoteDescription(desc).then(
    function() {
      onSetRemoteSuccess(pc2);
    },
    onSetSessionDescriptionError
  );
  // Since the 'remote' side has no media stream we need
  // to pass in the right constraints in order for it to
  // accept the incoming offer of audio and video.
  pc2.createAnswer().then(
    onCreateAnswerSuccess,
    onCreateSessionDescriptionError
  );
}

function onCreateAnswerSuccess(desc) {
  pc2.setLocalDescription(desc).then(
    function() {
      onSetLocalSuccess(pc2);
    },
    onSetSessionDescriptionError
  );
  pc1.setRemoteDescription(desc).then(
    function() {
      onSetRemoteSuccess(pc1);
    },
    onSetSessionDescriptionError
  );
}
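The offer/answer sequence described above can be simulated with plain objects to make the ordering clear. This toy model stands in for the real API, tracking only which descriptions each peer has set:

```javascript
// Toy model of the offer/answer exchange: each fake peer records
// its local and remote descriptions, mimicking the order of the
// setLocalDescription/setRemoteDescription calls in main.js.
function FakePeer(name) {
  this.name = name;
  this.localDescription = null;
  this.remoteDescription = null;
}
FakePeer.prototype.createOffer = function() {
  return {type: 'offer', sdp: 'v=0 (offer from ' + this.name + ')'};
};
FakePeer.prototype.createAnswer = function() {
  return {type: 'answer', sdp: 'v=0 (answer from ' + this.name + ')'};
};

var alice = new FakePeer('alice');
var bob = new FakePeer('bob');

// 1. Alice creates an offer and sets it as her local description.
var offer = alice.createOffer();
alice.localDescription = offer;
// 2. The offer travels over signaling; Bob sets it as his remote.
bob.remoteDescription = offer;
// 3. Bob answers, sets it locally, and sends it back to Alice.
var answer = bob.createAnswer();
bob.localDescription = answer;
// 4. Alice sets Bob's answer as her remote description.
alice.remoteDescription = answer;

console.log('negotiated:',
    alice.remoteDescription.type === 'answer' &&
    bob.remoteDescription.type === 'offer');
```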
From the browser console, inspect localStream, pc1 and pc2.
Look at pc1.localDescription. What does SDP format look like?

In this step you learned how to:
A complete version of this step is in the step-02 folder.
This step shows how to use WebRTC to stream video between peers — but this codelab is also about data!
In the next step find out how to stream arbitrary data using RTCDataChannel.
A complete version of this step is in the step-03 folder.
For this step, we'll use WebRTC data channels to send text between two textarea elements on the same page. That's not very useful, but does demonstrate how WebRTC can be used to share data as well as streaming video.
Remove the video and button elements from index.html and replace them with the following HTML:
<textarea id="dataChannelSend" disabled
placeholder="Press Start, enter some text, then press Send."></textarea>
<textarea id="dataChannelReceive" disabled></textarea>
<div id="buttons">
<button id="startButton">Start</button>
<button id="sendButton">Send</button>
<button id="closeButton">Stop</button>
</div>
One textarea will be for entering text, the other will display the text as streamed between peers.
index.html should now look like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <textarea id="dataChannelSend" disabled
    placeholder="Press Start, enter some text, then press Send."></textarea>
  <textarea id="dataChannelReceive" disabled></textarea>
  <div id="buttons">
    <button id="startButton">Start</button>
    <button id="sendButton">Send</button>
    <button id="closeButton">Stop</button>
  </div>
  <script src="js/lib/adapter.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
Update your JavaScript
Replace main.js with the contents of step-03/js/main.js.
Try out streaming data between peers: open index.html, press Start to set up the peer connection, enter some text in the textarea on the left, then click Send to transfer the text using WebRTC data channels.
This code uses RTCPeerConnection and RTCDataChannel to enable exchange of text messages.
Much of the code in this step is the same as for the RTCPeerConnection example.
The sendData() and createConnection() functions have most of the new code:
function createConnection() {
  dataChannelSend.placeholder = '';
  var servers = null;
  pcConstraint = null;
  dataConstraint = null;
  trace('Using SCTP based data channels');
  // For SCTP, reliable and ordered delivery is true by default.
  // Add localConnection to global scope to make it visible
  // from the browser console.
  window.localConnection = localConnection =
      new RTCPeerConnection(servers, pcConstraint);
  trace('Created local peer connection object localConnection');

  sendChannel = localConnection.createDataChannel('sendDataChannel',
      dataConstraint);
  trace('Created send data channel');

  localConnection.onicecandidate = iceCallback1;
  sendChannel.onopen = onSendChannelStateChange;
  sendChannel.onclose = onSendChannelStateChange;

  // Add remoteConnection to global scope to make it visible
  // from the browser console.
  window.remoteConnection = remoteConnection =
      new RTCPeerConnection(servers, pcConstraint);
  trace('Created remote peer connection object remoteConnection');

  remoteConnection.onicecandidate = iceCallback2;
  remoteConnection.ondatachannel = receiveChannelCallback;

  localConnection.createOffer().then(
    gotDescription1,
    onCreateSessionDescriptionError
  );
  startButton.disabled = true;
  closeButton.disabled = false;
}

function sendData() {
  var data = dataChannelSend.value;
  sendChannel.send(data);
  trace('Sent Data: ' + data);
}
The syntax of RTCDataChannel is deliberately similar to WebSocket, with a send() method and a message event.

Notice the use of dataConstraint. Data channels can be configured to enable different types of data sharing — for example, prioritizing reliable delivery over performance. You can find out more information about options at Mozilla Developer Network.
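For example, a channel tuned for low latency rather than guaranteed delivery could be created with options like these. The option names (ordered, maxRetransmits) are part of the standard RTCDataChannel init dictionary; the channel label is arbitrary:

```javascript
// Init options for an unreliable, unordered channel: suitable
// when the latest message matters more than every message
// (e.g. game state updates). In a browser you would pass this to
// peerConnection.createDataChannel('state', unreliableOptions).
var unreliableOptions = {
  ordered: false,     // don't hold back later messages
  maxRetransmits: 0   // never retransmit lost messages
};
console.log(JSON.stringify(unreliableOptions));
```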
In this step you learned how to:
A complete version of this step is in the step-03 folder.
You've learned how to exchange data between peers on the same page, but how do you do this between different machines? First, you need to set up a signaling channel to exchange metadata messages. Find out how in the next step!
In this step, you'll find out how to:
npm
to install project dependencies as specified in package.json A complete version of this step is in the step-04 folder.
In order to set up and maintain a WebRTC call, WebRTC clients (peers) need to exchange metadata:

Candidate (network) information.
Offer and answer messages providing information about media, such as resolution and codec capabilities.
In other words, an exchange of metadata is required before peer-to-peer streaming of audio, video, or data can take place. This process is called signaling.
In the previous steps, the sender and receiver RTCPeerConnection objects are on the same page, so 'signaling' is simply a matter of passing metadata between objects.
In a real world application, the sender and receiver RTCPeerConnections run in web pages on different devices, and we need a way for them to communicate metadata.
For this, we use a signaling server: a server that can pass messages between WebRTC clients (peers). The actual messages are plain text: stringified JavaScript objects.
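For example, a candidate message can be serialized with JSON.stringify before being emitted and revived with JSON.parse on receipt. A sketch with a made-up message object:

```javascript
// Round-trip a signaling message the way a relay server would
// see it: plain text on the wire, an identical object on arrival.
var message = {
  type: 'candidate',
  label: 0,
  candidate: 'candidate:1 1 udp 2122260223 192.0.2.1 54400 typ host'
};

var wire = JSON.stringify(message); // what travels via the server
var received = JSON.parse(wire);    // what the other peer sees

console.log(received.type + ' message, ' + wire.length + ' bytes');
```

The signaling server never needs to understand these messages; it just forwards the text.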
In this step we'll build a simple Node.js signaling server, using the Socket.IO Node module and JavaScript library for messaging. Experience with Node.js and Socket.IO will be useful, but not crucial; the messaging components are very simple.
In this example, the server (the Node application) is implemented in index.js, and the client that runs on it (the web app) is implemented in index.html.
The Node application in this step has two tasks.
First, it acts as a message relay:
socket.on('message', function(message) {
  log('Got message: ', message);
  socket.broadcast.emit('message', message);
});
Second, it manages WebRTC video chat 'rooms':
if (numClients === 1) {
  socket.join(room);
  socket.emit('created', room, socket.id);
} else if (numClients === 2) {
  socket.join(room);
  socket.emit('joined', room, socket.id);
  io.sockets.in(room).emit('ready');
} else { // max two clients
  socket.emit('full', room);
}
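The branching above can be captured in a pure function that decides the server's reply for a given occupancy, which makes the two-client limit easy to reason about without a running server. The helper name joinReply is made up for this sketch:

```javascript
// Decide the reply to 'create or join' from the number of clients
// now connected (the count includes the client that is joining),
// mirroring the branches in the Socket.IO handler above.
function joinReply(numClients) {
  if (numClients === 1) {
    return 'created'; // first client creates the room
  } else if (numClients === 2) {
    return 'joined';  // second client joins; room is now ready
  }
  return 'full';      // max two clients per room
}

console.log([1, 2, 3].map(joinReply).join(', '));
// → created, joined, full
```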
Our simple WebRTC application will permit a maximum of two peers to share a room.
Update index.html so it looks like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <script src="/socket.io/socket.io.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
You won't see anything on the page in this step: all logging is done to the browser console. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.)
Replace js/main.js with the following:
'use strict';

var isInitiator;

window.room = prompt("Enter room name:");

var socket = io.connect();

if (room !== "") {
  console.log('Message from client: Asking to join room ' + room);
  socket.emit('create or join', room);
}

socket.on('created', function(room, clientId) {
  isInitiator = true;
});

socket.on('full', function(room) {
  console.log('Message from client: Room ' + room + ' is full :^(');
});

socket.on('ipaddr', function(ipaddr) {
  console.log('Message from client: Server IP address is ' + ipaddr);
});

socket.on('joined', function(room, clientId) {
  isInitiator = false;
});

socket.on('log', function(array) {
  console.log.apply(console, array);
});
For this and the following steps, you'll run Socket.IO on Node.
At the top level of your work directory create a file named package.json with the following contents:
{
  "name": "webrtc-codelab",
  "version": "0.0.1",
  "description": "WebRTC codelab",
  "dependencies": {
    "node-static": "0.7.7",
    "socket.io": "1.2.0"
  }
}
This is an app manifest that tells Node Package Manager (npm) what project dependencies to install.
To install dependencies, run the following from the command line terminal in your work directory:
npm install
You should see an installation log that ends something like this:
As you can see, npm has installed the dependencies defined in package.json.
You may get warnings, but if there are errors in red, ask for help!
Create a new file index.js at the top level of your work directory (not in the js directory) and add the following code:
'use strict';

var os = require('os');
var nodeStatic = require('node-static');
var http = require('http');
var socketIO = require('socket.io');

var fileServer = new(nodeStatic.Server)();
var app = http.createServer(function(req, res) {
  fileServer.serve(req, res);
}).listen(8080);

var io = socketIO.listen(app);
io.sockets.on('connection', function(socket) {

  // convenience function to log server messages on the client
  function log() {
    var array = ['Message from server:'];
    array.push.apply(array, arguments);
    socket.emit('log', array);
  }

  socket.on('message', function(message) {
    log('Client said: ', message);
    // for a real app, would be room-only (not broadcast)
    socket.broadcast.emit('message', message);
  });

  socket.on('create or join', function(room) {
    log('Received request to create or join room ' + room);

    var numClients = io.sockets.sockets.length;
    log('Room ' + room + ' now has ' + numClients + ' client(s)');

    if (numClients === 1) {
      socket.join(room);
      log('Client ID ' + socket.id + ' created room ' + room);
      socket.emit('created', room, socket.id);
    } else if (numClients === 2) {
      log('Client ID ' + socket.id + ' joined room ' + room);
      io.sockets.in(room).emit('join', room);
      socket.join(room);
      socket.emit('joined', room, socket.id);
      io.sockets.in(room).emit('ready');
    } else { // max two clients
      socket.emit('full', room);
    }
  });

  socket.on('ipaddr', function() {
    var ifaces = os.networkInterfaces();
    for (var dev in ifaces) {
      ifaces[dev].forEach(function(details) {
        if (details.family === 'IPv4' && details.address !== '127.0.0.1') {
          socket.emit('ipaddr', details.address);
        }
      });
    }
  });
});
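The interface-scanning loop inside the 'ipaddr' handler can be pulled out into a testable helper that filters an os.networkInterfaces()-style object down to external IPv4 addresses. The helper name and sample data below are made up for illustration:

```javascript
// Given an object shaped like os.networkInterfaces(), return every
// non-loopback IPv4 address: the same filter the 'ipaddr' handler
// applies before emitting an address to the client.
function externalIPv4(ifaces) {
  var result = [];
  for (var dev in ifaces) {
    ifaces[dev].forEach(function(details) {
      if (details.family === 'IPv4' && details.address !== '127.0.0.1') {
        result.push(details.address);
      }
    });
  }
  return result;
}

var sample = {
  lo: [{family: 'IPv4', address: '127.0.0.1'}],
  eth0: [
    {family: 'IPv4', address: '192.0.2.10'},
    {family: 'IPv6', address: 'fe80::1'}
  ]
};
console.log(externalIPv4(sample)); // → [ '192.0.2.10' ]
```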
From the command line terminal, run the following command in the work directory:
node index.js
From your browser, open localhost:8080.
Each time you open this URL, you will be prompted to enter a room name. To join the same room, choose the same room name each time, such as 'foo'.
Open a new tab page, and open localhost:8080 again. Choose the same room name.
Open localhost:8080 in a third tab or window. Choose the same room name again.
Check the console in each of the tabs: you should see the logging from the JavaScript above.
Since the app permits a maximum of two clients per room, the third tab or window won't be able to join room foo: it will receive a 'full' message instead.

In this step, you learned how to:
A complete version of this step is in the step-04 folder.
Find out how to use signaling to enable two users to make a peer connection.
In this step you'll find out how to:
A complete version of this step is in the step-05 folder.
Replace the contents of index.html with the following:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="/css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <div id="videos">
    <video id="localVideo" autoplay muted></video>
    <video id="remoteVideo" autoplay></video>
  </div>
  <script src="/socket.io/socket.io.js"></script>
  <script src="js/lib/adapter.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
Replace js/main.js with the contents of step-05/js/main.js.
If your Node server is not running, start it by calling the following command in the work directory:
node index.js
(Make sure you're using the version of index.js from the previous step that implements Socket.IO.)
From your browser, open localhost:8080.
Open localhost:8080 again, in a new tab or window. One video element will display the local stream from getUserMedia() and the other will show the 'remote' video streamed via RTCPeerConnection.
View logging in the browser console.
In this step you learned how to:
A complete version of this step is in the step-05 folder.
If you have problems installing dependencies, try running npm cache clean from the command line.

Find out how to take a photo, get the image data, and share that between remote peers.
In this step you'll learn how to:
A complete version of this step is in the step-06 folder.
Previously you learned how to exchange text messages using RTCDataChannel.
This step makes it possible to share entire files: in this example, photos captured via getUserMedia().
The core parts of this step are as follows:

1. Get video with getUserMedia():

var video = document.getElementById('video');

function grabWebCamVideo() {
  console.log('Getting user media (video) ...');
  navigator.mediaDevices.getUserMedia({
    audio: false,
    video: true
  })
  .then(gotStream)
  .catch(function(e) {
    alert('getUserMedia() error: ' + e.name);
  });
}
2. Take a snapshot of the video and draw it on a canvas element:

var photo = document.getElementById('photo');
var photoContext = photo.getContext('2d');

function snapPhoto() {
  photoContext.drawImage(video, 0, 0, photo.width, photo.height);
  show(photo, sendBtn);
}
3. When the user clicks Send, convert the photo to bytes and transfer them via a data channel:

function sendPhoto() {
  // Split data channel message in chunks of this byte length.
  var CHUNK_LEN = 64000;
  var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
    len = img.data.byteLength,
    n = len / CHUNK_LEN | 0;

  console.log('Sending a total of ' + len + ' byte(s)');
  dataChannel.send(len);

  // split the photo and send in chunks of about 64KB
  for (var i = 0; i < n; i++) {
    var start = i * CHUNK_LEN,
      end = (i + 1) * CHUNK_LEN;
    console.log(start + ' - ' + (end - 1));
    dataChannel.send(img.data.subarray(start, end));
  }

  // send the remainder, if any
  if (len % CHUNK_LEN) {
    console.log('last ' + len % CHUNK_LEN + ' byte(s)');
    dataChannel.send(img.data.subarray(n * CHUNK_LEN));
  }
}
4. On the receiving side, convert data channel message bytes back to an image:

function receiveDataChromeFactory() {
  var buf, count;
  return function onmessage(event) {
    if (typeof event.data === 'string') {
      buf = window.buf = new Uint8ClampedArray(parseInt(event.data));
      count = 0;
      console.log('Expecting a total of ' + buf.byteLength + ' bytes');
      return;
    }
    var data = new Uint8ClampedArray(event.data);
    buf.set(data, count);
    count += data.byteLength;
    console.log('count: ' + count);
    if (count === buf.byteLength) {
      // we're done: all data chunks have been received
      console.log('Done. Rendering photo.');
      renderPhoto(buf);
    }
  };
}
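The chunking protocol (a length header followed by roughly 64KB binary chunks) can be exercised outside the browser with plain typed arrays. The chunk size is shrunk to 4 bytes here purely to keep the demo readable:

```javascript
// Split a typed array into chunks, then reassemble them: mirroring
// sendPhoto() on one side and receiveDataChromeFactory() on the other.
var CHUNK_LEN = 4; // tiny for illustration; the app uses 64000

function toChunks(data) {
  var chunks = [];
  for (var start = 0; start < data.byteLength; start += CHUNK_LEN) {
    chunks.push(data.subarray(start, start + CHUNK_LEN));
  }
  return chunks;
}

function reassemble(totalLength, chunks) {
  var buf = new Uint8ClampedArray(totalLength);
  var count = 0;
  chunks.forEach(function(chunk) {
    buf.set(chunk, count);
    count += chunk.byteLength;
  });
  return buf;
}

var original = new Uint8ClampedArray([1, 2, 3, 4, 5, 6, 7, 8, 9]);
var received = reassemble(original.byteLength, toChunks(original));
console.log(received.join(',')); // → 1,2,3,4,5,6,7,8,9
```

Sending the total length first is what lets the receiver allocate the buffer up front and know when the last chunk has arrived.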
function renderPhoto(data) {
  var canvas = document.createElement('canvas');
  canvas.width = photoContextW;
  canvas.height = photoContextH;
  canvas.classList.add('incomingPhoto');
  // trail is the element holding the incoming images
  trail.insertBefore(canvas, trail.firstChild);

  var context = canvas.getContext('2d');
  var img = context.createImageData(photoContextW, photoContextH);
  img.data.set(data);
  context.putImageData(img, 0, 0);
}
Replace the contents of your work folder with the contents of step-06. Your index.html file in work should now look like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="/css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <h2>
    <span>Room URL: </span><span id="url">...</span>
  </h2>
  <div id="videoCanvas">
    <video id="camera" autoplay></video>
    <canvas id="photo"></canvas>
  </div>
  <div id="buttons">
    <button id="snap">Snap</button><span> then </span><button id="send">Send</button>
    <span> or </span>
    <button id="snapAndSend">Snap &amp; Send</button>
  </div>
  <div id="incoming">
    <h2>Incoming photos</h2>
    <div id="trail"></div>
  </div>
  <script src="/socket.io/socket.io.js"></script>
  <script src="js/lib/adapter.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
If your Node server is not running, start it by calling the following command from your work directory:
node index.js
(Make sure you're using the version of index.js that implements Socket.IO — and remember to restart your Node server if you make changes.)
If necessary, click on the Allow button to allow the app to use your webcam.
The app will create a random room ID and add that ID to the URL. Open the URL from the address bar in a new browser tab or window.
Click the Snap & Send button, then look at the Incoming photos area at the bottom of the page in the other tab. The app transfers photos between tabs.
You should see something like this:
A complete version of this step is in the step-06 folder.
You built an app to do realtime video streaming and data exchange!
In this codelab you learned how to: