jamulus/src/recorder/cwavestream.h
Peter L Jones 8c1deffda7 Add recording support with Reaper Project generation
Includes the following changes

* Initial .gitignore
Administrative

* Fix up warning message
* Not all Windows file systems are case insensitive
Bugfixes

* (Qt5) Use QCoreApplication for headless
A possible solution for running the application as a headless server, but it loses the nice history graph, so it's not ideal.

* Avoid ESC closing chat
Because ESC shouldn't close the chat window. Or the main app window.

* Add console logging support for Windows
Whilst looking into headless support, I found this idea for Windows logging. This new, improved version makes far fewer changes.

----

* Add recording support with Reaper Project generation
The main feature!
    * New -r option to enable recording of PCM files and conversion to Reaper RPP with WAV files
    * New -R option to set the directory in which to create recording sessions
    You need to specify the -R option; there is no default... so -r and -R could arguably be combined.
    * New -T option to convert a session directory with PCM files into a Reaper RPP with WAV files
    You can use -T on "failed" sessions, if the -r option captures the PCMs but the RPP converter doesn't run for some reason. (It was useful during development; maybe less so once things seem stable.)

The recorder is implemented as a new thread with queuing from the main "real time" server thread.
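
A minimal sketch of the queuing pattern, assuming a dedicated QThread subclass draining a mutex-protected queue; CRecorderThread, RecorderItem and PushFrame are illustrative names, not the actual Jamulus classes:

    // Illustrative sketch only: the real-time server thread pushes decoded
    // frames into a mutex-protected queue; the recorder thread drains the
    // queue and performs the file I/O.
    #include <QByteArray>
    #include <QMutex>
    #include <QMutexLocker>
    #include <QQueue>
    #include <QThread>
    #include <QWaitCondition>

    struct RecorderItem
    {
        int        iClientId; // which connected client the frame belongs to
        QByteArray vecPcm;    // one frame of decompressed 16 bit LPCM
    };

    class CRecorderThread : public QThread
    {
    public:
        // Called from the real-time server thread; only holds the lock briefly.
        void PushFrame ( const RecorderItem& item )
        {
            QMutexLocker locker ( &mutex );
            queue.enqueue ( item );
            frameAvailable.wakeOne();
        }

    protected:
        void run() override
        {
            for (;;)
            {
                QMutexLocker locker ( &mutex );
                while ( queue.isEmpty() )
                {
                    frameAvailable.wait ( &mutex );
                }
                const RecorderItem item = queue.dequeue();
                locker.unlock();

                // ... append item.vecPcm to the client's RIFF WAVE file ...
                Q_UNUSED ( item ) // placeholder for the actual write
            }
        }

    private:
        QMutex               mutex;
        QWaitCondition       frameAvailable;
        QQueue<RecorderItem> queue;
    };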

When a new client connects, or if its audio format changes (e.g. mono to stereo), a new RIFF WAVE file is started.  Each frame of decompressed audio for each client is written out to the file as LPCM.  When the client disconnects, the RIFF WAVE headers are updated to reflect the final file length.
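
A minimal sketch of what that header fix-up could look like, assuming the canonical 44-byte PCM WAVE layout produced by the classes below (RIFF header, 16-byte fmt chunk, data chunk, no extra chunks); FinaliseWaveFile is a hypothetical helper, not the actual implementation:

    // Illustrative sketch only: overwrite the two 0xffffffff placeholder
    // sizes once the final file length is known.
    #include <QDataStream>
    #include <QFile>

    void FinaliseWaveFile ( QFile& file ) // file opened ReadWrite
    {
        const qint64 fileSize = file.size();

        QDataStream out ( &file );
        out.setByteOrder ( QDataStream::LittleEndian ); // RIFF fields are little-endian

        // RIFF chunk size at offset 4: file size minus the 8 bytes taken
        // by the "RIFF" id and the size field itself.
        file.seek ( 4 );
        out << static_cast<quint32> ( fileSize - 8 );

        // data sub-chunk size at offset 40: everything after the 44 byte header.
        file.seek ( 40 );
        out << static_cast<quint32> ( fileSize - 44 );
    }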

Once all clients disconnect, the session is considered ended and a Reaper RPP file is written.
2019-04-03 18:12:45 +01:00


#ifndef CWAVESTREAM_H
#define CWAVESTREAM_H

#include <cstdint> // fixed-width integer types used in the chunk headers
#include <QDataStream>

namespace recorder {

class HdrRiff
{
public:
    HdrRiff() {}

    static const uint32_t chunkId   = 0x46464952; // "RIFF"
    static const uint32_t chunkSize = 0xffffffff; // (will be overwritten) Size of file in bytes - 8 = size of data + 36
    static const uint32_t format    = 0x45564157; // "WAVE"
};

class FmtSubChunk
{
public:
    FmtSubChunk(const uint16_t _numChannels) :
        numChannels (_numChannels)
      , byteRate    (sampleRate * numChannels * bitsPerSample/8)
      , blockAlign  (numChannels * bitsPerSample/8)
    {
    }

    static const uint32_t chunkId       = 0x20746d66; // "fmt "
    static const uint32_t chunkSize     = 16;         // bytes in fmtSubChunk after chunkSize
    static const uint16_t audioFormat   = 1;          // PCM
    const uint16_t        numChannels;                // 1 for mono, 2 for joy... uh, stereo
    static const uint32_t sampleRate    = 48000;      // because it's Jamulus
    const uint32_t        byteRate;                   // sampleRate * numChannels * bitsPerSample/8
    const uint16_t        blockAlign;                 // numChannels * bitsPerSample/8
    static const uint16_t bitsPerSample = 16;
};

class DataSubChunkHdr
{
public:
    DataSubChunkHdr() {}

    static const uint32_t chunkId   = 0x61746164; // "data"
    static const uint32_t chunkSize = 0xffffffff; // (will be overwritten) Size of data
};

class CWaveStream : public QDataStream
{
public:
    CWaveStream(const uint16_t numChannels);
    explicit CWaveStream(QIODevice* iod, const uint16_t numChannels);
    CWaveStream(QByteArray* iod, QIODevice::OpenMode flags, const uint16_t numChannels);
    CWaveStream(const QByteArray& ba, const uint16_t numChannels);
    ~CWaveStream();

private:
    void waveStreamHeaders();

    const uint16_t  numChannels;
    const int64_t   initialPos;
    const ByteOrder initialByteOrder;
};

} // namespace recorder

recorder::CWaveStream& operator<<(recorder::CWaveStream& out, recorder::HdrRiff& hdrRiff);
recorder::CWaveStream& operator<<(recorder::CWaveStream& out, recorder::FmtSubChunk& fmtSubChunk);
recorder::CWaveStream& operator<<(recorder::CWaveStream& out, recorder::DataSubChunkHdr& dataSubChunkHdr);

#endif // CWAVESTREAM_H