Processes

A process encapsulates an algorithm. It allows the developer to define the input and output interfaces for the algorithm, configure the algorithm, and define a step function that applies the algorithm to a given input. For example, a classifier process would take an image as input and produce a vector of class scores as output. A model file would be used to configure the algorithm, and the step function would contain the processing logic of the classifier for the given input.

DIVA currently supports C++ and Python processes. The interface in both languages is fairly similar: a new process inherits from KwiverProcess in Python and from sprokit::process in C++. To configure the process, a configuration block is taken as input by the process; these configuration parameters can be specified at runtime by the pipeline. The input and output interfaces of a process are defined using port traits. A process can declare any ports that have been defined in the input/output port lists, and ports can be marked as required or optional using port flags, depending on the process requirements.

Python

class ClassifierProcess(KwiverProcess):
    def __init__(self, conf):
        KwiverProcess.__init__(self, conf)

        # declare configuration
        self.add_config_trait("model_file", "model_file",
                                'dummy.model', 'Model file for the classifier')
        self.declare_config_using_trait('model_file')

        # set up flags
        required = process.PortFlags()
        required.add(self.flag_required)
        optional = process.PortFlags()
        
        # declare ports
        self.declare_input_port_using_trait('image', required)
        self.declare_input_port_using_trait('file_name', optional)
        self.declare_output_port_using_trait('double_vector', required)

C++

namespace diva {

create_config_trait( model_file, vital::path_t, "dummy.model",
                "Model file for the classifier" );
class DIVA_CLASSIFIER_PROCESSES_NO_EXPORT classifier_process
  : public sprokit::process
{
public:
  classifier_process( kwiver::vital::config_block_sptr const& config )
      : process( config )
      , d( new classifier_process::priv )
  {
  
    declare_config_using_trait( model_file );
    // Set up flags
    sprokit::process::port_flags_t optional;
    sprokit::process::port_flags_t required;
    required.insert( flag_required );

    // Declare input ports
    declare_input_port_using_trait( image, required );
    declare_input_port_using_trait( file_name, optional );

    declare_output_port_using_trait( double_vector, required );
  }

The process can override the _configure and _step functions to implement the algorithm. _configure is primarily used for one-time setup steps, such as creating a model object from the model definition and loading the weight file into it. The _step function implements the core processing logic of the algorithm. For the classifier process, this would be the forward pass of the classifier; the output would be a vector of doubles that can be pushed out of the process.

Python

    def _configure(self):
        # Configure the process
        self.classifier = Classifier(self.config_value("model_file"))

    def _step(self):
        # Step Function for the process
        img_container = self.grab_input_using_trait('image')
        video_name = self.grab_input_using_trait('file_name')
        # Classify the image
        class_score = self.classifier.classify(img_container.image())
        # Push results to port
        self.push_to_port_using_trait('double_vector', class_score)

C++

  void _configure()
  {
    scoped_configure_instrumentation();
    d->classifier = ClassifierModel( config_value_using_trait( model_file ) );
  }

  void _step()
  {
    auto image_container = grab_from_port_using_trait( image );
    auto file_name = grab_from_port_using_trait( file_name ); 
    std::vector<double> output_classes = d->classifier.classify( image_container->get_image() );
    push_to_port_using_trait( double_vector, output_classes );
  }

Since input handling is completely decoupled from the algorithm, different input sources can be plugged in without making any changes to the classifier. Additionally, Kwiver supports abstract algorithms that can be configured to choose an implementation of the algorithm at runtime. Thus, the abstract classifier would be replaced by a concrete implementation such as InceptionNet based on the user's choice at runtime.
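
The runtime-selection idea can be sketched without any framework machinery. The following minimal illustration shows an abstract interface with interchangeable implementations chosen from a configuration string; every name here (ClassifierBase, DummyClassifier, InceptionNetClassifier, CLASSIFIER_REGISTRY, make_classifier) is hypothetical and not part of the Kwiver API:

```python
class ClassifierBase:
    """Abstract classifier; concrete implementations override classify()."""
    def classify(self, image):
        raise NotImplementedError


class DummyClassifier(ClassifierBase):
    """Trivial stand-in that always predicts the first class."""
    def classify(self, image):
        return [1.0, 0.0]


class InceptionNetClassifier(ClassifierBase):
    """Stand-in for a real network forward pass."""
    def classify(self, image):
        return [0.1, 0.9]


# Registry mapping configuration strings to concrete implementations.
CLASSIFIER_REGISTRY = {
    "dummy": DummyClassifier,
    "inception": InceptionNetClassifier,
}


def make_classifier(impl_name):
    """Choose a concrete classifier at runtime from a configuration value."""
    return CLASSIFIER_REGISTRY[impl_name]()


# The calling code only talks to the abstract interface, so swapping
# "dummy" for "inception" requires no change to the step logic.
scores = make_classifier("inception").classify(image=None)
```

Because the step logic depends only on the abstract interface, the implementation swap happens entirely in configuration, mirroring how a concrete algorithm is selected in a pipeline file at runtime.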

Loose Integration

Now that we have a working process, we need the Kwiver tools to detect it. To this end, we register the process with sprokit using __sprokit_register__ in Python and the register_factories method in C++.

Python

def __sprokit_register__():
    from sprokit.pipeline import process_factory
    module_name = 'python:kwiver.ClassifierSample'
    if process_factory.is_process_module_loaded(module_name):
        return
    process_factory.add_process('ClassifierSample', 'Dummy Classifier', 
                                ClassifierProcess)
    process_factory.mark_process_module_as_loaded(module_name)

C++

#include <sprokit/pipeline/process_factory.h>
#include <vital/plugin_loader/plugin_loader.h>

// -- list processes to register --
#include "sample_process.cxx"

DIVA_SAMPLE_PROCESS_EXPORT
void
register_factories( kwiver::vital::plugin_loader& vpm )
{
  static auto const module_name = kwiver::vital::plugin_manager::module_t( "ClassifierSample" );

  if ( sprokit::is_process_module_loaded( vpm, module_name ) )
  {
    return;
  }

  // -------------------------------------------------------------------------------------
  auto fact = vpm.ADD_PROCESS( diva::classifier_process );

  fact->add_attribute( kwiver::vital::plugin_factory::PLUGIN_NAME, "ClassifierSample" )
    .add_attribute( kwiver::vital::plugin_factory::PLUGIN_MODULE_NAME, module_name )
    .add_attribute( kwiver::vital::plugin_factory::PLUGIN_DESCRIPTION,
                    "Dummy Classifier" )
    .add_attribute( kwiver::vital::plugin_factory::PLUGIN_VERSION, "1.0" )
    ;


  // - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  sprokit::mark_process_module_as_loaded( vpm, module_name );
} // register_processes

You can use the plugin_explorer tool provided by Kwiver to check whether the registration was successful; all available plugins are displayed by plugin_explorer.

Note

If your Python process resides outside processes/python, or you add a new directory in processes/python, you would have to modify the setup scripts in the CMake directory.

Note

If your algorithm uses libraries that are not available in the default paths of the system, you would have to write a setup script to set the correct environment variables. This setup script requirement is the primary limitation of loose integration, and it goes away once an algorithm is tightly integrated in DIVA.

Complete Process Definition

Python

from sprokit.pipeline import process
from kwiver.kwiver_process import KwiverProcess

class ClassifierProcess(KwiverProcess):
    def __init__(self, conf):
        KwiverProcess.__init__(self, conf)

        # declare configuration
        self.add_config_trait("model_file", "model_file",
                                'dummy.model', 'Model file for the classifier')
        self.declare_config_using_trait('model_file')

        # set up flags
        required = process.PortFlags()
        required.add(self.flag_required)
        optional = process.PortFlags()
        
        # declare ports
        self.declare_input_port_using_trait('image', required)
        self.declare_input_port_using_trait('file_name', optional)
        self.declare_output_port_using_trait('double_vector', required)

    def _configure(self):
        # Configure the process
        self.classifier = Classifier(self.config_value("model_file"))

    def _step(self):
        # Step Function for the process
        img_container = self.grab_input_using_trait('image')
        video_name = self.grab_input_using_trait('file_name')
        # Classify the image
        class_score = self.classifier.classify(img_container.image())
        # Push results to port
        self.push_to_port_using_trait('double_vector', class_score)



def __sprokit_register__():
    from sprokit.pipeline import process_factory
    module_name = 'python:kwiver.ClassifierSample'
    if process_factory.is_process_module_loaded(module_name):
        return
    process_factory.add_process('ClassifierSample', 'Dummy Classifier', 
                                ClassifierProcess)
    process_factory.mark_process_module_as_loaded(module_name)

C++

#ifndef CLASSIFIER_PROCESS_H
#define CLASSIFIER_PROCESS_H
#include <sprokit/pipeline/process.h>
#include <processes/diva_classifier_process_export.h>
#include <vital/vital_types.h>

namespace diva {

create_config_trait( model_file, vital::path_t, "dummy.model",
                "Model file for the classifier" );
class DIVA_CLASSIFIER_PROCESSES_NO_EXPORT classifier_process
  : public sprokit::process
{
public:
  classifier_process( kwiver::vital::config_block_sptr const& config )
      : process( config )
      , d( new classifier_process::priv )
  {
  
    declare_config_using_trait( model_file );
    // Set up flags
    sprokit::process::port_flags_t optional;
    sprokit::process::port_flags_t required;
    required.insert( flag_required );

    // Declare input ports
    declare_input_port_using_trait( image, required );
    declare_input_port_using_trait( file_name, optional );

    declare_output_port_using_trait( double_vector, required );
  }
protected:
  void _configure()
  {
    scoped_configure_instrumentation();
    d->classifier = ClassifierModel( config_value_using_trait( model_file ) );
  }

  void _step()
  {
    auto image_container = grab_from_port_using_trait( image );
    auto file_name = grab_from_port_using_trait( file_name ); 
    std::vector<double> output_classes = d->classifier.classify( image_container->get_image() );
    push_to_port_using_trait( double_vector, output_classes );
  }

private:
  class priv
  {
  public:
    ClassifierModel classifier;
  };
  const std::unique_ptr<priv> d;
}; // end class classifier_process

}  // end namespace

#endif // CLASSIFIER_PROCESS_H

Tight Integration

Note

At the moment, only C++ can be used to tightly integrate an algorithm with the framework.

Activity Detectors

Since supporting the development of activity detectors is the primary objective of DIVA, this section presents the algorithms present in the framework. The processes in this section and the subsequent section are a small subset of the processes available through Kwiver. A more detailed list of processes is available here.

Temporal Localizers

The activity detectors in this class detect the temporal bounds of activities in an unbounded video.

Spatial Temporal Localizers

The activity detectors in this class detect the spatial and temporal bounds of activities in an unbounded video. They can be paired with an object detector/tracker to detect/track the participating objects.

Utility Processes

Input

class diva_experiment_process : public process

Parses an experiment file to provide input for other processes.

  • Input Ports
    • None
  • Output Ports
    • image Image obtained from experiment source specified in the experiment file (Required)
    • timestamp Frame number associated with the image (Required)
    • file_name Input source (Required)
  • Configuration
    • experiment_file_name DIVA experiment file
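
To show how these ports might be wired in practice, here is a hypothetical sprokit pipeline fragment connecting the experiment process to the classifier process defined earlier; the process instance names and the experiment/model file paths are placeholders:

```
# Hypothetical pipeline fragment; instance names and paths are placeholders.
process experiment
  :: diva_experiment_process
  experiment_file_name = experiment.yml

process classifier
  :: ClassifierSample
  model_file = dummy.model

connect from experiment.image
        to   classifier.image
connect from experiment.file_name
        to   classifier.file_name
```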

Optical Flow

class optical_flow_process : public process

Online GPU-based optical flow computed on successive images using OpenCV's Brox optical flow.

  • Input Ports
    • image Image obtained from the input source (Required)
    • timestamp Frame number associated with the image (Required)
  • Output Ports
    • image RGB representation of the optical flow image (Required)
  • Configuration
    • output_image_width Width of the output image
    • output_image_height Height of the output image
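
The optical flow process consumes the same image/timestamp pair produced by an input process. A hypothetical pipeline fragment (instance names and output sizes are placeholders) might look like:

```
# Hypothetical pipeline fragment; instance names and values are placeholders.
process experiment
  :: diva_experiment_process
  experiment_file_name = experiment.yml

process flow
  :: optical_flow_process
  output_image_width  = 320
  output_image_height = 240

connect from experiment.image
        to   flow.image
connect from experiment.timestamp
        to   flow.timestamp
```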

Multi Object Trackers (Coming Soon!)