Diffstat:
 voctocore/README.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/voctocore/README.md b/voctocore/README.md
index e3e62f4..b7b5242 100644
--- a/voctocore/README.md
+++ b/voctocore/README.md
@@ -3,7 +3,7 @@
## Design goals
Our Design is heavily influenced by gst-switch. We wanted a small videomixer core-process whose source code can be read and understood in about a weekend.
All Sources (Cameras, Slide-Grabbers) and Sinks (Streaming, Recording) should be separate Processes. As far as possible we wanted to reuse our existing and well-tested ffmpeg Commandlines for streaming and recording. It should be possible to connect additional Sinks at any time while the number of Sources is predefined in our Setup. All sources and sinks should be able to die and get restarted without taking the core process down.
-While Sources ans Sinks all run on the same Machine, Control- or Monitoring-Clients, for example a GUI, should be able to run on a different machine and connect to the core-process via Gigabit Ethernet. The core-process should be controllable by a very simple protocol which can easily be scripted or spoken with usual networking tools.
+While Sources and Sinks all run on the same Machine, Control- or Monitoring-Clients, for example a GUI, should be able to run on a different machine and connect to the core-process via Gigabit Ethernet. The core-process should be controllable by a very simple protocol which can easily be scripted or spoken with usual networking tools.
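As an illustration of how simple such a protocol can be to script, the following Python sketch sends a single text command over a plain TCP socket. The port number (9999) and the command name `set_video_a` are assumptions made for this example and may differ from the actual protocol documented further down.

```python
# Hypothetical sketch of talking to the core-process over TCP.
# Port 9999 and the command name are assumptions for illustration only.
import socket

with socket.create_connection(('localhost', 9999)) as control:
    control.sendall(b'set_video_a cam1\n')        # one text command per line
    print(control.recv(4096).decode().strip())    # the server replies with a status line
```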
## Design decisions
To meet our goal of a core-process that can be "read and understood in about a weekend", Python was chosen as the language for the high-level parts, with [GStreamer](http://gstreamer.freedesktop.org/) for the low-level media handling. GStreamer can be controlled from Python via the [PyGI](https://wiki.gnome.org/action/show/Projects/PyGObject) bindings.
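As a minimal sketch of this combination (not taken from the actual voctocore sources), a GStreamer pipeline can be built and started from Python via PyGI like this:

```python
# Minimal PyGI example: build and run a GStreamer pipeline from Python.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch('videotestsrc ! videoconvert ! autovideosink')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()   # keep the process alive while the pipeline runs
```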
@@ -11,7 +11,7 @@ As an Idea borrowed from gst-switch, all Video- and Audio-Streams to and from th
The ubiquitous Input/Output-Format into the core-process is therefore Raw UYVY Frames and Raw S16LE Audio in a Matroska container (for Timestamping), transported via TCP over localhost. Network handling is done in Python, because it allows for greater flexibility. After the TCP connection is ready, its file descriptor is passed to GStreamer, which handles the low-level read/write operations. To be able to attach/detach sinks, the `multifdsink`-Element can be used. For the Sources it's more complicated:
-When a source is not connected, its video and audio stream must be substituted with black frames ans silence, to that the remaining parts of the pipeline can keep on running. To achive this, a separate GStreamer-Pipeline is launched for an incoming Source-Connection and destroyed in case of a disconnect or an error. To pass Video -and Audio-Buffers between the Source-Pipelines and the other parts of the Mixer, we make use of the `inter(audio/video)(sink/source)`-Elements. `intervideosrc` and `interaudiosrc` implement the creation of black frames and silence, in case no source is connected or the source broke down somehow.
+When a source is not connected, its video and audio stream must be substituted with black frames and silence, so that the remaining parts of the pipeline can keep on running. To achieve this, a separate GStreamer-Pipeline is launched for an incoming Source-Connection and destroyed in case of a disconnect or an error. To pass Video- and Audio-Buffers between the Source-Pipelines and the other parts of the Mixer, we make use of the `inter(audio/video)(sink/source)`-Elements. `intervideosrc` and `interaudiosrc` implement the creation of black frames and silence in case no source is connected or the source has broken down.
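The following simplified sketch (not the actual voctocore code) illustrates this decoupling: a per-source receive pipeline writes into named inter-channels, while the mixer side reads from them and keeps producing black frames and silence on its own when nothing is connected. Channel names, caps and the sinks are chosen for illustration only.

```python
# Simplified sketch of decoupling a source from the mixer via inter-elements.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Per-source pipeline: created when a source connects, destroyed on disconnect
# or error. The fd of the accepted TCP connection would be handed over, e.g.:
#   source_pipeline.get_by_name('src').set_property('fd', conn.fileno())
#   source_pipeline.set_state(Gst.State.PLAYING)
source_pipeline = Gst.parse_launch(
    'fdsrc name=src ! matroskademux name=demux '
    'demux. ! queue ! intervideosink channel=cam1-video '
    'demux. ! queue ! interaudiosink channel=cam1-audio'
)

# Mixer-side pipeline: keeps running even while no source is connected,
# because intervideosrc/interaudiosrc substitute black frames and silence.
mixer_pipeline = Gst.parse_launch(
    'intervideosrc channel=cam1-video '
    '! video/x-raw,width=1280,height=720,framerate=25/1 '
    '! videoconvert ! autovideosink '
    'interaudiosrc channel=cam1-audio ! audioconvert ! autoaudiosink'
)
mixer_pipeline.set_state(Gst.State.PLAYING)
```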
If enabled in Config, the core process offers two formats for most outputs: Raw-Frames in mkv as described above, which should be used to feed recording or streaming processes running on the same machine. For the GUI, which usually runs on a different computer, they are not suited because of the bandwidth requirements (1920×1080 UYVY @25fps = 791 MBit/s). For this reason the Server offers Preview-Ports for each Input and the Main-Mix, which serve the same content, but the video frames there are jpeg-compressed, combined with uncompressed S16LE audio and encapsulated in mkv.
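How a sink attaches to such an output port can be sketched as follows (illustrative only, not the actual voctocore code): a small server socket accepts clients and hands each connection's file descriptor to a `multifdsink` via its `add` action signal, here for a jpeg-compressed preview-style stream. The port number and caps are assumptions.

```python
# Illustrative sketch of an output port backed by multifdsink.
import socket
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'videotestsrc is-live=true '
    '! video/x-raw,format=UYVY,width=1920,height=1080,framerate=25/1 '
    '! jpegenc ! queue ! matroskamux streamable=true ! multifdsink name=fdsink'
)
fdsink = pipeline.get_by_name('fdsink')
pipeline.set_state(Gst.State.PLAYING)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 12000))         # port chosen for illustration only
server.listen()

clients = []                            # keep the sockets alive while attached
while True:
    conn, _addr = server.accept()
    clients.append(conn)
    fdsink.emit('add', conn.fileno())   # attach; 'remove' detaches a client again
```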
@@ -147,8 +147,9 @@ They can be used to Implement Features like a "Cut-Button" in the GUI. When Clic
On Startup the Video-Mixer reads the following Configuration-Files:
- `<install-dir>/default-config.ini`
- `<install-dir>/config.ini`
- - `/etc/voctomix.ini`
- - `<homedir>/.voctomix.ini`
+ - `/etc/voctomix/voctocore.ini`
+ - `/etc/voctocore.ini`
+ - `<homedir>/.voctocore.ini`
- `<File specified on Command-Line via --ini-file>`
Settings from files further down this list override those from files above it. `default-config.ini` should not be edited, because a missing Setting will result in an Exception.
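The same override behaviour can be sketched with Python's `configparser`, which reads a list of files in order, lets later files override earlier ones and silently skips files that don't exist (a minimal sketch, not necessarily the exact voctocore code):

```python
# Read the configuration files in ascending priority: values from files later
# in the list override earlier ones; missing files are skipped silently.
import os
from configparser import ConfigParser

config = ConfigParser()
read_files = config.read([
    'default-config.ini',                      # shipped defaults, do not edit
    'config.ini',                              # local overrides in the install dir
    '/etc/voctomix/voctocore.ini',
    '/etc/voctocore.ini',
    os.path.expanduser('~/.voctocore.ini'),
    # plus an optional file given on the command line via --ini-file
])
print('loaded:', read_files)
```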