Merge pull request #672 from zivsha/pipeline_changes
Pipeline changes
dorodnic committed Oct 4, 2017
2 parents 0f67746 + 21d31a5 commit 07e2627
Showing 35 changed files with 2,253 additions and 1,178 deletions.
4 changes: 2 additions & 2 deletions .travis.yml
@@ -76,8 +76,8 @@ script:
sudo "PATH=$PATH" make install;

export LRS_LOG_LEVEL="DEBUG";
-wget http://realsense-hw-public.s3.amazonaws.com/rs-tests/awgc4;
-./unit-tests/live-test from awgc4 -d yes;
+wget http://realsense-hw-public.s3.amazonaws.com/rs-tests/live_test_awgc;
+./unit-tests/live-test from live_test_awgc -d yes;
fi
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
mkdir build && cd build;
33 changes: 20 additions & 13 deletions CMake/realsense.def
@@ -198,25 +198,32 @@ EXPORTS
rs2_playback_device_set_status_changed_callback
rs2_playback_device_get_current_status
rs2_playback_device_set_playback_speed
rs2_playback_device_stop

rs2_create_align

rs2_create_pipeline
-rs2_pipeline_get_device
-rs2_start_pipeline_with_callback
-rs2_start_pipeline_with_callback_cpp
-rs2_start_pipeline
-rs2_open_pipeline
-rs2_stop_pipeline
-rs2_enable_pipeline_stream
-rs2_enable_pipeline_device
-rs2_disable_stream_pipeline
-rs2_disable_all_streams_pipeline
+rs2_pipeline_stop
rs2_pipeline_wait_for_frames
rs2_pipeline_poll_for_frames
-rs2_pipeline_get_active_streams
-rs2_pipeline_get_stream_type_selection
rs2_delete_pipeline
+rs2_pipeline_start
+rs2_pipeline_start_with_config
+rs2_pipeline_get_active_profile
+rs2_pipeline_profile_get_device
+rs2_pipeline_profile_get_streams
+rs2_delete_pipeline_profile
+rs2_create_config
+rs2_delete_config
+rs2_config_enable_stream
+rs2_config_enable_all_stream
+rs2_config_enable_device
+rs2_config_enable_device_from_file
+rs2_config_enable_record_to_file
+rs2_config_disable_stream
+rs2_config_disable_all_streams
+rs2_config_resolve
+rs2_config_can_resolve

rs2_create_device_hub
rs2_device_hub_is_device_connected
6 changes: 3 additions & 3 deletions appveyor.yml
@@ -32,14 +32,14 @@ build:
verbosity: minimal
test_script:
- ps: >-
-$url = "http://realsense-hw-public.s3.amazonaws.com/rs-tests/awgc4"
+$url = "http://realsense-hw-public.s3.amazonaws.com/rs-tests/live_test_awgc"
-$output = "C:/projects/librealsense/build/Debug/awgc4"
+$output = "C:/projects/librealsense/build/Debug/live_test_awgc"
Invoke-WebRequest -Uri $url -OutFile $output
-& cmd.exe /c C:/projects/librealsense/build/Debug/live-test.exe from C:/projects/librealsense/build/Debug/awgc4 -d yes
+& cmd.exe /c C:/projects/librealsense/build/Debug/live-test.exe from C:/projects/librealsense/build/Debug/live_test_awgc -d yes
if ($LASTEXITCODE -ne 0)
38 changes: 21 additions & 17 deletions examples/align/readme.md
@@ -49,7 +49,7 @@ Next, we declare three functions to help the code look clearer:
```cpp
void render_slider(rect location, float& clipping_dist);
void remove_background(rs2::video_frame& color, const rs2::depth_frame& depth_frame, float depth_scale, float clipping_dist);
-bool try_get_depth_scale(rs2::pipeline& p, float& scale);
+bool try_get_depth_scale(rs2::device dev, float& scale);
```
`render_slider(..)` is where all the GUI code goes, and we will not cover this function in this overview.
@@ -70,19 +70,12 @@ We first define some variables that will be used to show the window and render t
texture renderer; // Helper for rendering images
```

-Then, we create a context:
+Then, we create an `align` object:

```cpp
-// Create a context
-rs2::context ctx;
```
-`rs2::context` encapsulates all of the devices and sensors, and provides some additional functionalities.
-In this example we will use the `context` to create an `rs2::align` object:

```cpp
-// Using the context to create a rs2::align object.
-// rs2::align allows you to perform alignment of depth frames to others
-rs2::align align = ctx.create_align(RS2_STREAM_COLOR);
+// Using the context to create a rs2::align object.
+// rs2::align allows you to perform alignment of depth frames to others
+rs2::align align(RS2_STREAM_COLOR);
```
`rs2::align` is a utility class that performs image alignment (registration) of two frames: each pixel from the first image is transformed so that it matches its corresponding pixel in the second image.
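Under the hood, aligning a depth pixel to another stream follows the classic deproject-transform-reproject recipe. Below is a minimal, self-contained sketch of that idea; the pinhole model is simplified (translation-only extrinsics, no distortion), and `intrinsics`/`align_pixel` are illustrative helpers, not SDK types or calls:

```cpp
#include <cassert>
#include <cmath>

// Simplified pinhole-camera intrinsics (hypothetical helper, not an SDK type)
struct intrinsics { float fx, fy, ppx, ppy; };

// Map a pixel from the depth image into the color image:
// deproject to 3D, apply the (translation-only) extrinsics, reproject.
void align_pixel(const intrinsics& depth_in, const intrinsics& color_in,
                 float tx, float ty, float tz,    // extrinsic translation, meters
                 float u, float v, float depth_m, // depth pixel and its depth in meters
                 float& out_u, float& out_v)      // resulting pixel in the color image
{
    // Deproject: pixel + depth -> 3D point in the depth camera's frame
    float x = (u - depth_in.ppx) / depth_in.fx * depth_m;
    float y = (v - depth_in.ppy) / depth_in.fy * depth_m;
    float z = depth_m;

    // Transform the point into the color camera's frame
    x += tx; y += ty; z += tz;

    // Project: 3D point -> pixel coordinates in the color image
    out_u = (x / z) * color_in.fx + color_in.ppx;
    out_v = (y / z) * color_in.fy + color_in.ppy;
}
```

With identical intrinsics and zero translation the mapping is the identity, which makes a handy sanity check; `rs2::align` performs the full version of this per pixel, including rotation and the cameras' distortion models.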
@@ -94,16 +87,27 @@ Next, we define a `rs2::pipeline` which is a top level API for using RealSense d
```cpp
// Create a pipeline to easily configure and start the camera
rs2::pipeline pipe;
-pipe.start(align);
+//Calling pipeline's start() without any additional parameters will start the first device
+// with its default streams.
+//The start function returns the pipeline profile which the pipeline used to start the device
+rs2::pipeline_profile profile = pipe.start();
```
-we pass the `align` object to `Pipeline::start` as the post processing handler for incoming frames that the pipeline produces. This means that frames will first be processed by the pipeline, and once the pipeline is ready to output them, it will pass them on to `align` which will perform additional processing.
-
-`rs2::align` contains an overload for `operator()` which takes a `rs2::frame`. This allows us to pass it as the argument for `pipeline::start`. The pipeline synchronizes its frames before publishing them, thus, by calling `start` with an `align` object, the frames it will provide will be synchronized and aligned to one another. We will soon see how to get these frames from the `align` object.
-
-At this point of the program the camera is configured and the resulted frames are being aligned.
+At this point of the program the camera is configured and streams are available from the pipeline.

Before actually using the frames, we try to get the depth scale units of the depth camera. Depth scale units are used to convert the depth pixel data (16-bit unsigned) into metric units.

```cpp
+// Each depth camera might have different units for depth pixels, so we get it here
+float depth_scale;
+//Using the pipeline's profile, we can retrieve the device that the pipeline uses
+if (!try_get_depth_scale(profile.get_device(), depth_scale))
+{
+    std::cerr << "Device does not have a depth sensor" << std::endl;
+    return EXIT_FAILURE;
+}
```

These units are expressed as depth in meters corresponding to a depth value of 1. For example if we have a depth pixel with a value of 2 and the depth scale units are 0.5 then that pixel is `2 X 0.5 = 1` meter away from the camera.
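That conversion is a single multiply. A sketch, assuming the usual raw 16-bit depth representation (`depth_to_meters` is an illustrative helper, not an SDK function):

```cpp
#include <cassert>
#include <cstdint>

// A depth pixel's raw 16-bit value times the device's depth scale gives meters.
// Illustrative helper, not part of the RealSense SDK.
float depth_to_meters(uint16_t raw_depth, float depth_scale)
{
    return raw_depth * depth_scale;
}
```

For the example above, `depth_to_meters(2, 0.5f)` yields `1.0f`, i.e. one meter.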

Now comes the interesting part of the application. We start our main loop, which breaks only when the window is closed:
17 changes: 8 additions & 9 deletions examples/align/rs-align.cpp
@@ -14,7 +14,7 @@

void render_slider(rect location, float& clipping_dist);
void remove_background(rs2::video_frame& color, const rs2::depth_frame& depth_frame, float depth_scale, float clipping_dist);
-bool try_get_depth_scale(rs2::pipeline& p, float& scale);
+bool try_get_depth_scale(rs2::device dev, float& scale);
int main(int argc, char * argv[]) try
{
// Create and initialize GUI related objects
@@ -30,13 +30,15 @@ int main(int argc, char * argv[]) try

// Create a pipeline to easily configure and start the camera
rs2::pipeline pipe;

-// By passing align to Pipeline::start, frames will be passed to align's handler
-pipe.start();
+//Calling pipeline's start() without any additional parameters will start the first device
+// with its default streams.
+//The start function returns the pipeline profile which the pipeline used to start the device
+rs2::pipeline_profile profile = pipe.start();

// Each depth camera might have different units for depth pixels, so we get it here
float depth_scale;
-if (!try_get_depth_scale(pipe, depth_scale))
+//Using the pipeline's profile, we can retrieve the device that the pipeline uses
+if (!try_get_depth_scale(profile.get_device(), depth_scale))
{
std::cerr << "Device does not have a depth sensor" << std::endl;
return EXIT_FAILURE;
@@ -113,11 +115,8 @@ catch (const std::exception & e)
return EXIT_FAILURE;
}

-bool try_get_depth_scale(rs2::pipeline& p, float& scale)
+bool try_get_depth_scale(rs2::device dev, float& scale)
{
-// Get the underlying device from the pipeline
-rs2::device dev = p.get_device();

// Go over the device's sensors
for (rs2::sensor& sensor : dev.query_sensors())
{
21 changes: 14 additions & 7 deletions examples/multicam/rs-multicam.cpp
@@ -22,6 +22,7 @@ class device_container
rs2::colorizer colorize_frame;
texture tex;
rs2::pipeline pipe;
+rs2::pipeline_profile profile;
};

public:
@@ -43,23 +44,29 @@ class device_container
}
// Create a pipeline from the given device
rs2::pipeline p;
-p.enable_device(serial_number);
+rs2::config c;
+c.enable_device(serial_number);
+// Start the pipeline with the configuration
+rs2::pipeline_profile profile = p.start(c);
// Hold it internally
-_devices.emplace(serial_number, view_port{ {},{},{}, p });
+_devices.emplace(serial_number, view_port{ {},{},{}, p, profile });

-// Start the pipeline with the default configuration
-p.start();
}

void remove_devices(const rs2::event_information& info)
{
std::lock_guard<std::mutex> lock(_mutex);
// Go over the list of devices and check if it was disconnected
-for (auto itr = _devices.begin(); itr != _devices.end(); ++itr)
+auto itr = _devices.begin();
+while(itr != _devices.end())
{
-if (info.was_removed(itr->second.pipe.get_device()))
+if (info.was_removed(itr->second.profile.get_device()))
{
-_devices.erase(itr);
+itr = _devices.erase(itr);
}
+else
+{
+++itr;
+}
}
}
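The switch from a `for` loop to a `while` loop above matters because `std::map::erase` invalidates the erased iterator, and advancing a stale iterator is undefined behavior; the loop must advance through the iterator that `erase()` returns. The same idiom in isolation, on a plain `std::map` (a standalone sketch, unrelated to the SDK):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

// Remove every entry with a negative value, returning how many were erased.
// Mirrors the remove_devices() pattern: advance via the iterator erase() returns.
std::size_t prune_negative(std::map<std::string, int>& m)
{
    std::size_t removed = 0;
    auto itr = m.begin();
    while (itr != m.end())
    {
        if (itr->second < 0)
        {
            itr = m.erase(itr); // erase() hands back the next valid iterator
            ++removed;
        }
        else
        {
            ++itr;
        }
    }
    return removed;
}
```

Since C++11, `erase` on associative containers returns the iterator following the removed element, which is what makes this pattern safe.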