Vital AI Documentation

  1. Installing the Vital AI Development Kit
    1. Download the Vital AI Development Kit
    2. Install the Vital AI Development Kit
    3. Uninstalling the Vital Development Kit
  2. Command Line Quick Reference
    1. VitalSigns
    2. Vital Utilities
    3. Vital Service
    4. Project Management
    5. VitalPredict
  3. Using VitalSigns
    1. VitalSigns Overview
    2. Installing VitalSigns
    3. VitalSigns Commands
  4. Managed Properties and Classes
  5. Unmanaged Properties and Classes
  6. The Vital Service API
    1. Two available interfaces
    2. Common Objects
    3. Vital Service API Calls
    4. Vital Service Admin API Calls
  7. VitalService Query Builder
    1. Concepts
    2. Basic Structure
    3. Property Types
      1. Comparators
    4. Parameters: Value Statements
    5. Available Parameters
    6. Constraints
    7. Provides Statements
    8. Constraint Containers
    9. Boolean Containers
    10. Top Level Containers
    11. ARC Containers
    12. Constraint Context
    13. Binding Names to Results
    14. Select Queries
    15. Results
    16. Graph Queries
    17. Results
    18. Aggregation Queries
    19. Native Queries
    20. Builder in Groovy, Java, Scala, and Spark
    21. Importing Domain Classes in Java, Scala, and Spark
    22. Supported Endpoints
  8. Vital Service Operations Builder
  9. VitalSigns Property Restriction Annotations
    1. GraphObject Validation
    2. Annotations
  10. Vital AI Platform Overview
  11. Hello World Example
    1. Add a new class to the data model for your new entity
    2. Run VitalSigns to create data model bindings
    3. Add a simple dictionary-based NLP extraction rule to the NLP processing step to recognize your new entity
    4. Deploy the Vital AI Platform including the NLP Processor using the new extraction rule
    5. Start a new project in your favorite IDE
    6. Add the Vital AI Client libraries to your project
    7. Add a new "main" function which calls the Vital Service, passes in some text, and prints out the results
  12. Creating a Grails News Application with Vital AI Platform
  13. Creating a Dashboard for the Vital AI Platform
    1. Steps to create the Grails News Dashboard
  14. Vital AI Ontology
  15. Others

1.Installing the Vital AI Development Kit #

1.1.Download the Vital AI Development Kit #

After signing up for an account on the vital.ai dashboard, subscribe to the Vital AI Development Kit (VDK) from the Products page. Under My Products, you will find downloads for a distribution of the VDK (tar.gz/zip), the VDK install script, the VDK uninstall script, VDK documentation, and a vital.ai license file which will enable you to use the VDK on your computer.

 

1.2.Install the Vital AI Development Kit #

On Mac OSX, Linux, or Cygwin

  • Download the following:

    1. VDK distribution (tar.gz/zip) (Do not decompress the distribution)

    2. VDK install script

    3. License file

  • Set your VITAL_HOME environment variable to a directory on your file system.

  • Add VITAL_HOME/vitalsigns/bin and VITAL_HOME/vitalservice/bin to your PATH environment variable.

For Mac OSX users, refer to: https://github.com/ersiner/osx-env-sync for information on synchronizing OSX environment variables for command line and GUI applications from a single source. Without this step, GUI applications will not be able to run using the VITAL_HOME environment variable.

  • In a terminal window, navigate to the directory into which you downloaded the VDK distribution and the VDK install script.

  • Make the VDK install script executable by entering:

chmod u+x ./vdk-install-0.2.302.sh
  • Install the VDK in your VITAL_HOME directory by entering:

./vdk-install-0.2.302.sh $VITAL_HOME .
  • Place the downloaded license file into the vital-license directory within your VITAL_HOME.
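
The environment setup in the steps above can be sketched as follows, e.g. in `~/.bash_profile` (the directory `$HOME/vital-home` is only a placeholder; substitute whatever directory you chose for your installation):

```shell
# Placeholder install location; substitute the directory you chose.
VITAL_HOME="$HOME/vital-home"
export VITAL_HOME

# Put the VitalSigns and Vital Service command-line tools on the PATH.
PATH="$VITAL_HOME/vitalsigns/bin:$VITAL_HOME/vitalservice/bin:$PATH"
export PATH
```

After opening a new terminal (or sourcing the profile), `vitalsigns` and the other tools described below should resolve from any directory.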

1.3.Uninstalling the Vital Development Kit #

On Mac OSX, Linux, or Cygwin

  • Download the VDK uninstall script.

  • In a terminal window, navigate to the directory into which you downloaded the VDK uninstall script.

  • Make the VDK uninstall script executable by entering:

chmod u+x ./vdk-uninstall-0.2.302.sh
  • Uninstall the VDK by entering:

./vdk-uninstall-0.2.302.sh $VITAL_HOME .

2.Command Line Quick Reference #

2.1.VitalSigns #

  • vitalsigns: tool to manage domain ontologies

Usage: vitalsigns <command> [options]
   where command is one of: [upversion, downversion, mergestatus, listindividuals, gitpostmerge, gitmergetool, diff, normalizeontology, purge, version, removeindividuals, deploy, help, gitdisable, checkin, mergeindividuals, merge, validateontology, verify, gitenable, gitjarmerge, generate, status, undeploy]


usage: vitalsigns generate [options]
 -j,--jar <arg>            (only groovy) optional output jar file path, by
                           default it will write domain jar at
                           $VITAL_HOME/domain-groovy-jar/app-groovy-versio
                           n.jar
 -js,--json-schema <arg>   (only json) optional output json schema file
                           path, by default it will write domain schema
                           json at
                           $VITAL_HOME/domain-json-schema/app-version.js
 -o,--ontology <arg>       OWL Domain Ontology File
 -or,--override            force to generate the new version even if no
                           changes detected or json schema exists
 -p,--package <arg>        (only groovy) package (ie.
                           com.your-domain.your-app-name), required if
                           ontology does not have default package property
 -t,--target <arg>         output target: 'groovy' or 'json', default
                           'groovy'


usage: vitalsigns version [options]
 -o,--ontology <arg>   OWL Domain Ontology File


usage: vitalsigns status (no options)
 -q,--quick     quick - skip parsing ontologies
 -v,--verbose   print warnings and other messages


usage: vitalsigns listindividuals [options]
 -o,--ontology <arg>   OWL Domain Ontology File


usage: vitalsigns mergeindividuals [options]
 -i,--individuals <arg>   individuals ontology file path
 -o,--ontology <arg>      ontology file name in
                          $VITAL_HOME/domain-ontology/


usage: vitalsigns removeindividuals [options]
 -i,--individuals <arg>   individuals ontology file path
 -o,--ontology <arg>      ontology file name in
                          $VITAL_HOME/domain-ontology/


usage: vitalsigns diff [options]
 -h,--history <arg>   show the Nth prior version found in the archive -
                      used with single ont param only
 -o,--ont <arg>       exactly 1 or 2 such params required: ontology file
                      path, when just file name used it will look in
                      $VITAL_HOME/domain-ontology/ *

usage: vitalsigns merge [options]
 -m,--merging <arg>   merging ontology
 -o,--ont <arg>       input ontology


usage: vitalsigns normalizeontology [options]
 -cb,--commentsbefore   comments inserted before, after if flag not
                        specified
 -o,--ont <arg>         ontology (will be replaced)


usage: vitalsigns validateontology [options]
 -o,--ont <arg>   input ontology


usage: vitalsigns verify (no options)



usage: vitalsigns gitenable (no options)



usage: vitalsigns gitdisable (no options)



usage: vitalsigns upversion [options]
 -o,--ont <arg>   input ontology, either file name or path to domain owl
                  in ($VITAL_HOME/domain-ontology/)


usage: vitalsigns downversion [options]
 -o,--ont <arg>       input ontology, either file name or path to domain
                      owl in ($VITAL_HOME/domain-ontology/)
 -v,--version <arg>   optional version to be reverted to, n.n.n, latest
                      used if not specified


usage: vitalsigns checkin [options]
 -o,--ont <arg>   external ontology file, must not be located in
                  $VITAL_HOME


usage: vitalsigns purge [options]
 -a,--app <arg>   owl and jar files prefix


usage: vitalsigns gitpostmerge (no options)



usage: vitalsigns mergestatus (no options)



usage: vitalsigns gitmergetool (options)
 -b,--base <arg>     base owl file path
 -l,--local <arg>    local owl file path
 -o,--output <arg>   output merged owl file path
 -r,--remote <arg>   remote owl file path


usage: vitalsigns gitjarmerge (options)
 -b,--base <arg>     base jar file path
 -l,--local <arg>    local jar file path
 -o,--output <arg>   output merged jar file path
 -r,--remote <arg>   remote jar file path


usage: vitalsigns deploy (options)
 -app,--application <arg>    Application ID
 -j,--jar <arg>              domain jar file path
 -js,--json-schema <arg>     json schema file path
 -o,--ontology <arg>         OWL Domain Ontology File
 -org,--organization <arg>   Organization ID


usage: vitalsigns undeploy (options)
 -j,--jar <arg>            domain jar file path
 -js,--json-schema <arg>   json schema file path
 -o,--ontology <arg>       OWL Domain Ontology File

2.2.Vital Utilities #

  • vitalimport: import data into Vital Service

usage: vitalimport [options]
 -b,--batch <arg>     blocks per batch, default: 1
 -c,--check           check input files - DOES NOT IMPORT
 -f,--file <arg>      input file or directory, supported extensions
                      .vital[.gz], .nt[.gz]
 -h,--help            Show usage information
 -s,--segment <arg>   target segment
 -v,--verbose         report import progress (only in non-big-files mode)
  • vitalexport: export data from Vital Service

usage: vitalexport [options]
 -b,--block <arg>     block size (only .vital[.gz]), default 10
 -h,--help            Show usage information
 -o,--output <arg>    output block file or remote temp file name,
                      supported extensions: .vital[.gz] .nt[.gz]
 -ow,--overwrite      overwrite output file
 -s,--segment <arg>   target segment
  • vitalconvert: convert data between Block, CSV, or N-Triples format

usage: vitalconvert [options]
 -h,--help            display usage
 -i,--input <arg>     input block, n-triple or csv file, valid extensions
                      .vital[.gz] .nt[.gz] .csv[.gz]
 -m,--map <arg>       map file, required with block -> csv conversion
 -o,--output <arg>    output block, n-triple or csv file, valid
                      extensions .vital[.gz] .nt[.gz] .csv[.gz]
 -oh,--outputHeader   prepend csv header (block->csv case only)
 -ow,--overwrite      overwrite output file if exists
 -sh,--skipHeader     skip input csv header (csv->block case only and map
                      file specified)
  • vitalmerge: Merge input block files into output block file

usage: vitalmerge [options]
 -i,--input <arg>    input block file
 -o,--output <arg>   output block file
 -or,--override      ignore ontology version conflicts and transform
                     global annotations into block annotations
 -ow,--overwrite     overwrite output file if exists
  • vitalquery: query Vital Service

usage: vitalquery [options]
 -g,--group <arg>        group graph matches into blocks, explicit boolean
                         flag parameter [true|false], requires mainProp,
                         only graph query
 -h,--help               Show usage information
 -mp,--mainProp <arg>    main bound property, required when --group=true
 -o,--output <arg>       output (.vital[.gz]|.sparql|.txt) block or sparql
                         or txt file (depending on -s flag and query
                         type), it prints to console otherwise, txt file
                         for select distinct case only
 -ow,--overwrite         overwrite output file
 -prof,--profile <arg>   vitalservice profile, default: default
 -q,--query <arg>        query file (.groovy|.builder) - groovy or query
                         builder defined query
 -s,--tosparql           output the query as sparql instead of executing
                         it
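
As an illustration of the utilities above, a check-then-import-then-export round trip might look like the following sketch. The file names and the segment `mysegment` are hypothetical, and the commands require an installed VDK, so the sketch first checks that the tools are on the PATH:

```shell
# Hypothetical file and segment names; requires the VDK tools on the PATH.
if command -v vitalimport >/dev/null 2>&1; then
  # Validate the block file without importing anything (-c).
  vitalimport -c -f data.vital.gz
  # Import the blocks into the hypothetical segment 'mysegment'.
  vitalimport -f data.vital.gz -s mysegment
  # Export the segment back out, overwriting any previous backup.
  vitalexport -o backup.vital.gz -s mysegment -ow
else
  echo "VDK utilities not on PATH"
fi
```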

2.3.Vital Service #

  • vitaldatascript: manage and run datascripts and jobs

usage: vitaldatascript <command> [options] ...
usage: vitaldatascript help (prints usage)
usage: vitaldatascript listdatascripts [options]
 -p,--path <arg>         scripts base path: admin/ * <app>/ * or
                         commons/admin/ * or commons/scripts/ *
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript listjobs [options]
 -p,--path <arg>         jobs base path: admin/ * <app>/ * or
                         commons/admin/ * or commons/scripts/ *
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript getdatascript [options]
 -o,--output <arg>       optional output file to save the script body to
 -ow,--overwrite         overwrite output file if exists
 -p,--path <arg>         scripts base path: admin/<script_name>
                         <app>/<script_name> or
                         commons/admin/<script_name> or
                         commons/scripts/<script_name>
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript adddatascript [options]
 -f,--file <arg>         script input file path
 -p,--path <arg>         script path: admin/<script_name>
                         <app>/<script_name> or
                         commons/admin/<script_name> or
                         commons/scripts/<script_name>
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript removedatascript [options]
 -p,--path <arg>         script path: admin/<script_name>
                         <app>/<script_name> or
                         commons/admin/<script_name> or
                         commons/scripts/<script_name>
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript rundatascript [options]
 -i,--input <arg>        input params groovy file - must return a map of
                         parameters
 -p,--path <arg>         script path: admin/<script_name>
                         <app>/<script_name> or
                         commons/admin/<script_name> or
                         commons/scripts/<script_name>
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitaldatascript listtasks [options]
 -prof,--profile <arg>   vitalservice profile, default: default
 -t,--taskID <arg>       optional taskID used as a filter
usage: vitaldatascript killtask [options]
 -prof,--profile <arg>   vitalservice profile, default: default
 -t,--taskID <arg>       taskID to kill
  • vitalftp: transfer file to Vital Service, get file from Vital Service, delete file on Vital Service

usage: vitalftp <command> [options] ...
usage: vitalftp help (prints usage)
usage: vitalftp put [options]
 -f,--file <arg>         local file to upload
 -ow,--overwrite         overwrite remote file if exists
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalftp get [options]
 -d,--directory <arg>    output directory to save the file
 -n,--name <arg>         remote file name
 -ow,--overwrite         overwrite the output file if exists
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalftp ls [options]
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalftp del [options]
 -n,--name <arg>         remote file name
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalftp purge (no options)
 -prof,--profile <arg>   vitalservice profile, default: default
  • vitallucene: manage Lucene implementation of Vital Service

usage: vitallucene <command> [options] ...
usage: vitallucene help (prints usage)
usage: vitallucene init [options]
 -f,--force              override existing directory
 -l,--location <arg>     either target directory or 'conf' value to use
                         service config
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitallucene listapps [options]
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitallucene addapp [options]
 -a,--app <arg>          app ID
 -n,--name <arg>         app name
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitallucene removeapp [options]
 -a,--app <arg>          app ID
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitallucene listsegments [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitallucene removesegment [options]
 -a,--appID <arg>        app ID
 -d,--deleteData         delete data
 -prof,--profile <arg>   vitalservice profile, default: default
 -s,--segmentID <arg>    segment ID
usage: vitallucene addsegment [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
 -ro,--readOnly          read only
 -s,--segmentID <arg>    segment ID
 -t,--type <arg>         segment type: [disk, memory]
usage: vitallucene datamigrate [options]
 -a,--appID <arg>                     app ID
 -b,--builder <arg>                   builder file, .groovy or .builder
                                      extension
 -d,--direction <arg>                 [upgrade, downgrade], required in
                                      builderless mode
 -dss,--delete-source-segment <arg>   [true, false] overrides
                                      deleteSourceSegment in a builder
 -h,--help                            display usage
 -i,--input <arg>                     overrides source segment in a
                                      builder
 -o,--output <arg>                    overrides destination segment in a
                                      builder
 -owlfile,--owl-file <arg>            older owl file name option, required
                                      in builderless mode
 -prof,--profile <arg>                vitalservice profile, default:
                                      default
  • vitalprime: manage VitalPrime Vital Service implementation

usage: vitalprime <command> [options] ...
usage: vitalprime help (prints usage)
usage: vitalprime listapps [options]
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime addapp [options]
 -a,--app <arg>          app ID
 -n,--name <arg>         app name
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime removeapp [options]
 -a,--app <arg>          app ID
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime listsegments [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime removesegment [options]
 -a,--appID <arg>        app ID
 -d,--deleteData         delete data
 -prof,--profile <arg>   vitalservice profile, default: default
 -s,--segmentID <arg>    segment ID
usage: vitalprime addsegment [options]
 -a,--appID <arg>              app ID
 -p,--provisioningFile <arg>   optional  provisioning config file - used
                               when vitalprime hosts DynamoDB or IndexDB
                               with DynamoDB backend
 -prof,--profile <arg>         vitalservice profile, default: default
 -ro,--readOnly                read only
 -s,--segmentID <arg>          segment ID
 -t,--type <arg>               optional inner segment (endpoint) type,
                               required if prime hosts more than 1
                               endpoint
usage: vitalprime reindexsegment [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
 -s,--segmentID <arg>    segment ID
usage: vitalprime verifyindexes (no options)
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime rebuildindexes (no options)
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime status (no options)
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime shutdown (no options)
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime get [options]
 -a,--appID <arg>        app ID
 -o,--output <arg>       optional output block file .vital[.gz], by
                         default prints to console
 -prof,--profile <arg>   vitalservice profile, default: default
 -u,--uri <arg>          graph object URI
usage: vitalprime update [options]
 -a,--appID <arg>        app ID
 -i,--input <arg>        input block file with single block and single
                         graph object .vital[.gz]
 -prof,--profile <arg>   vitalservice profile, default: default
 -s,--segment <arg>      segment ID
usage: vitalprime insert [options]
 -a,--appID <arg>        app ID
 -i,--input <arg>        input block file with single block and single
                         graph object .vital[.gz]
 -prof,--profile <arg>   vitalservice profile, default: default
 -s,--segment <arg>      segment ID
usage: vitalprime delete [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
 -u,--uri <arg>          graph object URI
usage: vitalprime listmodels [options]
 -a,--appID <arg>        app ID
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime deploy [options]
 -a,--appID <arg>        app ID
 -art,--artifact <arg>   artifact file name (jar/owl/js) or file path in
                         $VITAL_HOME/domain-groovy-jar/,
                         $VITAL_HOME/domain-json-schema/ or
                         $VITAL_HOME/domain-ontology/
 -prof,--profile <arg>   vitalservice profile, default: default
 -sa,--singleartifact    deploy this single artifact only
usage: vitalprime undeploy [options]
 -a,--appID <arg>        app ID
 -art,--artifact <arg>   artifact file name (jar/owl/js) or file path in
                         $VITAL_HOME/domain-groovy-jar/,
                         $VITAL_HOME/domain-json-schema/ or
                         $VITAL_HOME/domain-ontology/
 -prof,--profile <arg>   vitalservice profile, default: default
 -sa,--singleartifact    undeploy this single artifact only
usage: vitalprime load [options]
 -a,--appID <arg>        app ID
 -j,--jar <arg>          remote jar name
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime unload [options]
 -a,--appID <arg>        app ID
 -j,--jar <arg>          remote jar name
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime syncmodels [options]
 -a,--appID <arg>        app ID
 -d,--direction <arg>    direction, one of, pull (default), push, both
 -prof,--profile <arg>   vitalservice profile, default: default
usage: vitalprime datamigrate [options]
 -a,--appID <arg>                     app ID
 -b,--builder <arg>                   builder file, .groovy or .builder
                                      extension
 -d,--direction <arg>                 [upgrade, downgrade], required in
                                      builderless mode
 -dss,--delete-source-segment <arg>   [true, false] overrides
                                      deleteSourceSegment in a builder
 -h,--help                            display usage
 -i,--input <arg>                     overrides source segment in a
                                      builder
 -o,--output <arg>                    overrides destination segment in a
                                      builder
 -owlfile,--owl-file <arg>            older owl file name option, required
                                      in builderless mode
 -prof,--profile <arg>                vitalservice profile, default:
                                      default
  • vitaldynamodb: Manage DynamoDB Vital Service implementation

usage: vitaldynamodb <command> [options] ...

usage: vitaldynamodb help (prints usage)

usage: vitaldynamodb init (no options)

usage: vitaldynamodb listapps [options]

usage: vitaldynamodb addapp [options]
 -a,--app <arg>    app ID
 -n,--name <arg>   app name

usage: vitaldynamodb removeapp [options]
 -a,--app <arg>   app ID

usage: vitaldynamodb listsegments [options]
 -a,--appID <arg>   app ID

usage: vitaldynamodb removesegment [options]
 -a,--appID <arg>       app ID
 -d,--deleteData        delete data
 -s,--segmentID <arg>   segment ID

usage: vitaldynamodb addsegment [options]
 -a,--appID <arg>              app ID
 -p,--provisioningFile <arg>   provisioning config file
 -ro,--readOnly                read only
 -s,--segmentID <arg>          segment ID
  • vitalsparqlstore: Manage sparqlstore Vital Service implementation

usage: vitalsparqlstore <command> [options] ...

usage: vitalsparqlstore help (prints usage)

usage: vitalsparqlstore listapps [options]

usage: vitalsparqlstore init

usage: vitalsparqlstore addapp [options]
 -a,--app <arg>    app ID
 -n,--name <arg>   app name

usage: vitalsparqlstore removeapp [options]
 -a,--app <arg>   app ID

usage: vitalsparqlstore listsegments [options]
 -a,--appID <arg>   app ID

usage: vitalsparqlstore removesegment [options]
 -a,--appID <arg>       app ID
 -d,--deleteData        delete data
 -s,--segmentID <arg>   segment ID

usage: vitalsparqlstore addsegment [options]
 -a,--appID <arg>       app ID
 -ro,--readOnly         read only
 -s,--segmentID <arg>   segment ID
  • vitalindexdb: Manage IndexDB Vital Service implementation (combined Index and Database)

usage: vitalindexdb <command> [options] ...

usage: vitalindexdb help (prints usage)

usage: vitalindexdb listapps [options]

usage: vitalindexdb init
 -f,--force   override existing directory

usage: vitalindexdb addapp [options]
 -a,--app <arg>    app ID
 -n,--name <arg>   app name

usage: vitalindexdb removeapp [options]
 -a,--app <arg>   app ID

usage: vitalindexdb listsegments [options]
 -a,--appID <arg>   app ID

usage: vitalindexdb removesegment [options]
 -a,--appID <arg>       app ID
 -d,--deleteData        delete data
 -s,--segmentID <arg>   segment ID

usage: vitalindexdb addsegment [options]
 -a,--appID <arg>              app ID
 -p,--provisioningFile <arg>   (dynamodb database type only) provisioning
                               config file
 -ro,--readOnly                read only
 -s,--segmentID <arg>          segment ID

usage: vitalindexdb reindexsegment [options]
 -a,--appID <arg>       app ID
 -s,--segmentID <arg>   segment ID

usage: vitalindexdb verifyindexes (no options)

usage: vitalindexdb rebuildindexes (no options)
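
As an illustration of the vitaldatascript workflow described above, adding and then running a script might look like this. The script path `commons/scripts/MyScript.groovy` and the parameter file are hypothetical, and the sketch checks for the tool first:

```shell
# Hypothetical script and parameter file names; requires an installed VDK.
if command -v vitaldatascript >/dev/null 2>&1; then
  # Upload a local script under the commons scripts path.
  vitaldatascript adddatascript -f ./MyScript.groovy -p commons/scripts/MyScript.groovy
  # Run it, passing a groovy file that returns a map of parameters.
  vitaldatascript rundatascript -p commons/scripts/MyScript.groovy -i ./params.groovy
else
  echo "vitaldatascript not on PATH"
fi
```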

2.4.Project Management #

  • vital-switch: a simple script that changes a soft link for the location of VITAL_HOME, making it easy to switch between multiple installations. Implementation found here: https://github.com/vital-ai/vital-scripts

usage: vital-switch <name>

2.5.VitalPredict #

vitalpredict

3.Using VitalSigns #

3.1.VitalSigns Overview #

VitalSigns is a component of the Vital AI Platform. VitalSigns allows a data model to be defined in an ontology using OWL and creates bindings for the data model in the various components of the Vital AI Platform. This allows a mobile or web application developer to use the same data model as a data scientist running jobs in Hadoop. The data model is used seamlessly throughout the application, eliminating painful data transformations and a host of data integration issues.

3.2.Installing VitalSigns #

Installing the Vital AI Development Kit (VDK) will enable you to use VitalSigns. Since VitalSigns commands are run from a Linux or Unix shell, Microsoft Windows users will also need to install Cygwin, a Unix-like environment and command-line interface for Microsoft Windows.

3.3.VitalSigns Commands #

General Usage

VitalSigns commands have the following usage pattern:

vitalsigns <command> [options]

help

To obtain a summary of VitalSigns commands and their usage options enter:

vitalsigns help

generate

  1. To generate a domain jar from a domain ontology enter:

vitalsigns generate -o | --ontology <input ontology> -p | --package <package> -t groovy

where input ontology is either the file name or path to the input ontology and package is the name of your application’s package (i.e. com.your-domain.your-app-name). The -p | --package option is required only if your ontology does not have a default package property. By default, the domain jar will be written at $VITAL_HOME/domain-groovy-jar/app-groovy-version.jar.

To supply the jar file with an alternative name or to write it to a location other than the default ($VITAL_HOME/domain-groovy-jar/), enter:

vitalsigns generate -j | --jar <output jar> -o | --ontology <input ontology> -p | --package <package> -t groovy

where output jar is the path to the output jar.

Note: To force vitalsigns to generate a new version of the jar file even if no changes have been made to the ontology, specify the -or | --override option.

  2. To generate a json schema from a domain ontology enter:

vitalsigns generate -o | --ontology <input ontology> -p | --package <package> -t json

where input ontology is either the file name or path to the input ontology and package is the name of your application’s package (i.e. com.your-domain.your-app-name). The -p | --package option is required only if your ontology does not have a default package property. By default, the json schema will be written at $VITAL_HOME/domain-json-schema/app-version.json.

To supply the json schema with an alternative name or to write it to a location other than the default ($VITAL_HOME/domain-json-schema/), enter:

vitalsigns generate -js | --json-schema <output json> -o | --ontology <input ontology> -p | --package <package> -t json

where output json is the path to the output json schema.

To force vitalsigns to generate a new version of the json schema even if no changes have been made to the ontology or a json schema already exists, specify the -or | --override option.
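
Putting the two generate targets together for a hypothetical ontology myapp-0.1.0.owl with package com.example.myapp (all names are illustrative, and the command requires an installed VDK, so the sketch checks for it first):

```shell
# Hypothetical ontology and package names; requires an installed VDK.
if command -v vitalsigns >/dev/null 2>&1; then
  # Groovy domain jar, written under $VITAL_HOME/domain-groovy-jar/ by default.
  vitalsigns generate -o myapp-0.1.0.owl -p com.example.myapp -t groovy
  # JSON schema for the same ontology, regenerated even if one already exists.
  vitalsigns generate -o myapp-0.1.0.owl -t json -or
else
  echo "vitalsigns not on PATH"
fi
```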

version

status

listindividuals

To list the individuals or “ground level” objects of an ontology enter:

vitalsigns listindividuals -o <input ontology>

where input ontology is either the file name or path to an input ontology (either external or within $VITAL_HOME).

For example, entering the command:

vitalsigns listindividuals -o vital-samples-0.1.0.owl

will produce the following output:

(screenshot: individuals listed from vital-samples-0.1.0.owl)

mergeindividuals

removeindividuals

To remove all of an ontology’s individuals or “ground level” objects enter:

vitalsigns removeindividuals -i <individuals file path> -o <ontology file name>

where individuals file path is the path to the individuals ontology file and ontology file name is the name of the target ontology in $VITAL_HOME/domain-ontology/.

When removeindividuals executes, the current version of the ontology is archived and a new ontology with the individuals removed is generated.

If git is enabled, all changes are added to the staging area.

diff

merge

To merge two ontologies enter:

vitalsigns merge -m | --merging <merging ontology> -o | --ont <input ontology>

where merging ontology is either the file name or path to the merging ontology (external or within $VITAL_HOME) and input ontology is either the file name or path to the input ontology within $VITAL_HOME.

When merge executes, the current version of the input ontology is archived and a new higher-numbered version of the ontology with merged content is generated.

Following a merge, it may be necessary to manually resolve any merge conflicts.
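
For example, merging a hypothetical external ontology changes.owl into a checked-in myapp-0.1.0.owl (file names are illustrative; the sketch checks for an installed VDK first):

```shell
# Hypothetical file names; requires an installed VDK.
if command -v vitalsigns >/dev/null 2>&1; then
  # Merge changes.owl into myapp-0.1.0.owl; the current version is archived
  # and a new higher-numbered version with the merged content is generated.
  vitalsigns merge -m ./changes.owl -o myapp-0.1.0.owl
else
  echo "vitalsigns not on PATH"
fi
```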

normalizeontology

validateontology

The VitalSigns command validateontology checks whether an ontology is normalized and whether any comments documenting prior merge conflicts are present. To validate an ontology enter:

vitalsigns validateontology -o | --ont <input ontology>

where input ontology is the file name or path to an input ontology.

verify

gitenable

To enable vitalsigns git functionality enter:

vitalsigns gitenable

When vitalsigns git functionality is enabled, vitalsigns will automatically add changes in the working directory (made by running vitalsigns commands) to the staging area. You will still need to commit these changes.

gitdisable

To disable vitalsigns git functionality enter:

vitalsigns gitdisable

When vitalsigns git functionality is disabled, vitalsigns will no longer add changes in the working directory to the staging area.

upversion

To increase the patch version of the current ontology enter:

vitalsigns upversion -o | --ont <input ontology>

where input ontology is either the file name or path to a domain owl in $VITAL_HOME/domain-ontology/.

When upversion executes, the current version of the ontology is archived and a new version of the ontology is generated.

If git is enabled, all changes are added to the staging area.

Note: upversion only modifies the patch number. To modify the major or minor version numbers, the ontology must be manually edited.

downversion

To decrease the patch version of the current ontology enter one of the following:

vitalsigns downversion -o | --ont <input ontology>
vitalsigns downversion -o | --ont <input ontology> -v | --version <version number>

where input ontology is either the file name or path to a domain owl in $VITAL_HOME/domain-ontology/ and version number is the optional version to be reverted to (n.n.n). If a version is not specified, vitalsigns will use the previous version contained in the archive.

When downversion executes, the current version of the ontology is archived and a new higher-numbered version containing the content of the older version is generated.

If git is enabled, all changes are added to the staging area.

Note: downversion only modifies the patch number. To modify the major or minor version numbers, the ontology must be manually edited.

checkin

To copy an external ontology into $VITAL_HOME/domain-ontology/, enter:

vitalsigns checkin -o | --ont <external ontology>

where external ontology is the path to an ontology which must not be located in $VITAL_HOME.

If git is enabled, the checked in ontology will be added to the staging area.

purge

To remove an ontology, including all of its archived versions as well as any domain jars or JSON schema generated from it, enter:

vitalsigns purge -a | --app <prefix>

where prefix is the prefix of the ontology’s owl and jar files.

gitpostmerge

mergestatus

mergetool

gitjarmerge

4. Managed Properties and Classes

VitalSigns turns classes and properties defined in OWL into data objects available in the JVM.

Given a class hierarchy of animals, with a corresponding hierarchy of properties, we can do:

def cat = new Cat().generateURI()
def dog = new Dog().generateURI()
def animal = new Animal().generateURI()
def mammal = new Mammal().generateURI()
// super/sub classes
assert cat.isSubTypeOf(Animal.class) == true
assert cat.isSubTypeOf(Dog.class) == false
assert animal.isSuperTypeOf(Cat.class) == true
// super/sub properties
cat.length = 18.0
// property types
// traits
mammal = mammal.addType(Cat.class)
assert mammal.isaType(cat) == true
mammal = mammal.removeType(cat)
assert mammal.isaType(cat) == false

5. Unmanaged Properties and Classes

A base Vital data object may be used to represent an arbitrary instance of a class and set of properties.

def instance = new VITAL_Node()
instance.type = ["http://xmlns.com/foaf/0.1/Person"]
VitalSigns.addExternalNamespace("foaf", "http://xmlns.com/foaf/0.1/")
instance."foaf:name" = "Marc Hadfield"^xsd.string
instance."foaf:homepage" = "http://www.hadfield.org"^xsd.string
println instance.toRDF()

6. The Vital Service API

6.1. Two available interfaces

  • Vital Service API: Primary interface, used within an “App”

  • Vital Service Admin API: Used for administrative functions, allows using “App” as a parameter

6.2. Common Objects

  • GraphObject

  • Organization

  • App

  • Segment

  • EndpointType

  • VitalStatus

  • ResultList

  • URIProperty

  • List: a form of “container” of GraphObjects, which may be iterated over

  • Transaction

  • ServiceOperation: abstraction for a modification of data, including: insert, update, delete, import, export

  • VitalQuery: a query object; one of: VitalSelectQuery, VitalGraphQuery, VitalPathQuery

  • VitalPathQuery

  • VITAL_Event

6.3. Vital Service API Calls

  • EndpointType getEndpointType()

  • Organization getOrganization()

  • App getApp()

  • String getDefaultSegmentName()

  • void setDefaultSegmentName(String defaultsegment)

  • VitalStatus validate()

  • VitalStatus ping()

  • VitalStatus close()

  • List listSegments()

  • URIProperty generateURI(Class<? extends GraphObject> clazz)

  • Transaction createTransaction()

  • VitalStatus commitTransaction(Transaction transaction)

  • VitalStatus rollbackTransaction(Transaction transaction)

  • List getTransactions()

  • VitalStatus setTransaction(Transaction transaction)

  • ResultList get(GraphContext graphContext, URIProperty uri)

  • ResultList get(GraphContext graphContext, URIProperty uri, boolean cache)

  • ResultList get(GraphContext graphContext, List uris)

  • ResultList get(GraphContext graphContext, List uris, boolean cache)

  • ResultList get(GraphContext graphContext, URIProperty uri, List containers)

  • ResultList get(GraphContext graphContext, List uris, List containers)

  • VitalStatus delete(URIProperty uri)

  • VitalStatus delete(List uris)

  • VitalStatus deleteObject(GraphObject object)

  • VitalStatus deleteObjects(List objects)

  • ResultList insert(VitalSegment targetSegment, GraphObject graphObject)

  • ResultList insert(VitalSegment targetSegment, List graphObjectsList)

  • ResultList save(VitalSegment targetSegment, GraphObject graphObject, boolean create)

  • ResultList save(VitalSegment targetSegment, List graphObjectsList, boolean create)

  • ResultList save(GraphObject graphObject)

  • ResultList save(List graphObjectsList)

  • VitalStatus doOperations(ServiceOperations operations)

  • ResultList callFunction(String function, Map arguments)

  • ResultList query(VitalQuery query)

  • ResultList queryLocal(VitalQuery query)

  • ResultList queryContainers(VitalQuery query, List containers)

  • ResultList getExpanded(URIProperty uri, boolean cache)

  • ResultList getExpanded(URIProperty uri, VitalPathQuery query, boolean cache)

  • VitalStatus deleteExpanded(URIProperty uri)

  • VitalStatus deleteExpandedObject(GraphObject object)

  • VitalStatus deleteExpanded(URIProperty uri, VitalPathQuery query)

  • VitalStatus deleteExpanded(List uris, VitalPathQuery query)

  • VitalStatus deleteExpandedObjects(List objects, VitalPathQuery query)

  • VitalStatus bulkImport(VitalSegment segment, InputStream inputStream)

  • VitalStatus bulkExport(VitalSegment segment, OutputStream outputStream)

  • VitalStatus sendEvent(VITAL_Event event, boolean waitForDelivery)

  • VitalStatus sendEvents(List events, boolean waitForDelivery)

  • VitalStatus uploadFile(URIProperty uri, String fileName, InputStream inputStream, boolean overwrite)

  • VitalStatus downloadFile(URIProperty uri, String fileName, OutputStream outputStream, boolean closeOutputStream)

  • VitalStatus fileExists(URIProperty uri, String fileName)

  • VitalStatus deleteFile(URIProperty uri, String fileName)

  • ResultList listFiles(String filepath)
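Taken together, a typical client session chains several of these calls. The following Groovy sketch is illustrative only: how the service instance is obtained is not shown above, and Person is a hypothetical VitalSigns-generated domain class.

```groovy
// Sketch only: "service" is assumed to be an initialized VitalService
// instance, and Person a hypothetical generated domain class.
List segments = service.listSegments()
VitalSegment segment = segments.get(0)

// create a new object with a generated URI, then insert it
Person person = new Person().generateURI()
person.name = "John"
ResultList inserted = service.insert(segment, person)

// delete the object again
VitalStatus status = service.deleteObject(person)
```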

6.4. Vital Service Admin API Calls

  • EndpointType getEndpointType()

  • Organization getOrganization()

  • VitalStatus validate()

  • VitalStatus ping()

  • VitalStatus close()

  • List listSegments(App app)

  • VitalSegment addSegment(App app, VitalSegment config, boolean createIfNotExists)

  • VitalStatus removeSegment(App app, VitalSegment segment, boolean deleteData)

  • List listApps()

  • VitalStatus addApp(App app)

  • VitalStatus removeApp(App app)

  • URIProperty generateURI(App app, Class<? extends GraphObject> clazz)

  • Transaction createTransaction()

  • VitalStatus commitTransaction(Transaction transaction)

  • VitalStatus rollbackTransaction(Transaction transaction)

  • List getTransactions()

  • VitalStatus setTransaction(Transaction transaction)

  • ResultList get(App app, GraphContext graphContext, URIProperty uri)

  • ResultList get(App app, GraphContext graphContext, URIProperty uri, boolean cache)

  • ResultList get(App app, GraphContext graphContext, List uris)

  • ResultList get(App app, GraphContext graphContext, List uris, boolean cache)

  • ResultList get(App app, GraphContext graphContext, URIProperty uri, List containers)

  • ResultList get(App app, GraphContext graphContext, List uris, List containers)

  • VitalStatus delete(App app, URIProperty uri)

  • VitalStatus delete(App app, List uris)

  • VitalStatus deleteObject(App app, GraphObject object)

  • VitalStatus deleteObjects(App app, List objects)

  • ResultList insert(App app, VitalSegment targetSegment, GraphObject graphObject)

  • ResultList insert(App app, VitalSegment targetSegment, List graphObjectsList)

  • ResultList save(App app, VitalSegment targetSegment, GraphObject graphObject, boolean create)

  • ResultList save(App app, VitalSegment targetSegment, List graphObjectsList, boolean create)

  • ResultList save(App app, GraphObject graphObject)

  • ResultList save(App app, List graphObjectsList)

  • VitalStatus doOperations(App app, ServiceOperations operations)

  • ResultList callFunction(App app, String function, Map arguments)

  • ResultList query(App app, VitalQuery query)

  • ResultList queryLocal(App app, VitalQuery query)

  • ResultList queryContainers(App app, VitalQuery query, List containers)

  • ResultList getExpanded(App app, URIProperty uri, boolean cache)

  • ResultList getExpanded(App app, URIProperty uri, VitalPathQuery query, boolean cache)

  • VitalStatus deleteExpanded(App app, URIProperty uri)

  • VitalStatus deleteExpandedObject(App app, GraphObject object)

  • VitalStatus deleteExpanded(App app, URIProperty uri, VitalPathQuery query)

  • VitalStatus deleteExpanded(App app, List uris, VitalPathQuery query)

  • VitalStatus deleteExpandedObjects(App app, List objects, VitalPathQuery query)

  • VitalStatus bulkImport(App app, VitalSegment segment, InputStream inputStream)

  • VitalStatus bulkExport(App app, VitalSegment segment, OutputStream outputStream)

  • VitalStatus sendEvent(App app, VITAL_Event event, boolean waitForDelivery)

  • VitalStatus sendEvents(App app, List events, boolean waitForDelivery)

  • VitalStatus uploadFile(App app, URIProperty uri, String fileName, InputStream inputStream, boolean overwrite)

  • VitalStatus downloadFile(App app, URIProperty uri, String fileName, OutputStream outputStream, boolean closeOutputStream)

  • VitalStatus fileExists(App app, URIProperty uri, String fileName)

  • VitalStatus deleteFile(App app, URIProperty uri, String fileName)

  • ResultList listFiles(App app, String filepath)
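The admin calls mirror the primary API, with the App passed explicitly to each call. A hedged sketch of provisioning a new app and segment follows; adminService, app, and segmentConfig are assumed to be initialized elsewhere and are not part of the signatures above.

```groovy
// Sketch only: adminService is assumed to be an initialized admin
// client; app and segmentConfig are assumed App / VitalSegment
// configuration objects populated elsewhere.
VitalStatus appStatus = adminService.addApp(app)

// create the segment if it does not already exist
VitalSegment segment = adminService.addSegment(app, segmentConfig, true)

// data operations then take the App as their first argument
List segments = adminService.listSegments(app)
```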

7. VitalService Query Builder

Queries can be executed using the VitalService API, with results returned. The queries run on the configured VitalService endpoint, such as Vital Prime, an RDF Triplestore, DynamoDB, or other database.

The most convenient way to compose a query is using the Query Builder framework.

This page describes using the Builder to compose and execute a query, and process the results.

7.1. Concepts

  • Graph

A graph is a collection of data which is composed of objects (“nodes” or “vertices”) which are connected by links (“edges”).

  • Graph Objects

VitalService defines four major types of graph objects: Nodes, Edges, HyperNodes, and HyperEdges. Edges connect one node to another and are directed: node1 –edge-→ node2. A HyperEdge may connect any two elements and is also directed: node1 –hyperedge-→ edge1 or node1 –hyperedge-→ hypernode. HyperNodes and HyperEdges are used to represent “meta” information, such as an annotation on another object. Most data is represented with nodes and edges.

  • URIs

A URI (Uniform Resource Identifier) is an identifier that is globally unique. An example is: http://vital.ai/ontology/vital/Person/123

  • Properties

A graph object may contain any number of properties consisting of name/value pairs. Each property has a datatype, such as “String”, “Date”, or “Integer”. An example would be a string property with name “hasName” with the value of “John”.

  • Classes

A Graph Object belongs to a class (or more than one class) which defines its meaning. An example would be “Person” which would be a class to represent all Graph Objects that refer to a Person. A class has associated properties, such as a Person having a “name” and a “birthday”.

  • Graph Query

A graph query is a query to find a subset of a graph that meets specific criteria. For instance, if you have a graph of Person objects linked to Email objects, connected by “Sent” and “Received” edges, you could query for the graph of all messages in the last 10 days, which would return results such as:

Person1 –Sent-→ Message1 –Received-→ Person2

You could further constrain the query with additional criteria, such as querying for all Messages received on a Person’s birthday.

  • ARC

An ARC is part of a graph query which represents hopping from one Graph Object to the next by transitioning over an Edge (or HyperEdge). In the previous example of Email Messages, there were two hops: from Person1 to Message1, and from Message1 to Person2.

This more formally is represented by:

ARC {
    node_constraint { Person }
    ARC {
        edge_constraint { Edge_Sent }
        node_constraint { Message }
        ARC {
            node_constraint { Person }
            edge_constraint { Edge_Received }
        }
    }
}
  • Provided Variables

An ARC may “provide” a value which may be constrained in another part of the query. Continuing the previous example, the “birthday” value from Person2 can be provided and used to constrain the timestamp of the Email messages, so that we only match Email messages sent on the receiver’s birthday.

ARC {
    node_constraint { Person }
    ARC {
        edge_constraint { Edge_Sent }
        node_constraint { Message }
        node_provides { "tstamp = timestamp" }
        constraint { "?tstamp = ?bday" }
        ARC {
            node_constraint { Person }
            node_provides { "bday = birthday" }
            edge_constraint { Edge_Received }
        }
    }
}
  • Bound Names

In our results, we may wish to associate part of the graph with a name, so that we can easily interpret the results. Adding this to the example yields:

ARC {
    node_constraint { Person }
    node_bind { "person-uri" }
    ARC {
        edge_constraint { Edge_Sent }
        edge_bind { "sent-edge-uri" }
        node_constraint { Message }
        node_provides { "tstamp = timestamp" }
        node_bind { "message-uri" }
        constraint { "?tstamp = ?bday" }
        ARC {
            node_constraint { Person }
            node_provides { "bday = birthday" }
            node_bind { "birthday-person-uri" }
            edge_constraint { Edge_Received }
            edge_bind { "received-edge-uri" }
        }
    }
}
 

7.2. Basic Structure

Queries are composed directly in code using query structures that resemble query languages such as SQL and SPARQL.

By being directly in code, the elements of the query can be checked for syntax and proper type, catching many mistakes and allowing quick query composition. Additionally, elements of the query can be auto-suggested in your development IDE, also speeding query composition.

As an example, the constraint:

constraint { Person.props().name.equalTo("John") }

produces a query constraint requiring that the “name” property is equal to the value “John”.

The “name” property can be auto-suggested as it’s associated with the “Person” class. Since “name” is a String property, the comparators for Strings, such as “equalTo” or “contains”, can be auto-suggested.

7.3. Property Types

  • Boolean

  • Date

  • Double

  • Float

  • GeoLocation: a longitude and latitude pair

  • Integer

  • Long

  • String

  • URI

  • MultiValue: contains a set of values

  • Other: generic container of a value

7.3.1. Comparators

  • equalTo: has the identical value

  • notEqualTo: does not have the identical value

  • lessThan

  • lessThanEqualTo

  • greaterThan

  • greaterThanEqualTo

  • contains: contains this string

  • contains: contains this value for a multi-value property

  • contains_i: contains this string, ignoring the case

  • notContains: does not contain this string

  • notContains: does not contain this value for a multi-value property

  • notContains_i: does not contain this string, ignoring case

  • exists: tests if a value for the named property exists

  • notExists: tests if a value for the named property does not exist

  • before: used with Dates, synonym for lessThan

  • after: used with Dates, synonym for greaterThan

  • GeoLocation comparators to be added

  • oneOf: used with a list of items, equivalent to an OR of equalTo comparators

  • noneOf: used with a list of items, equivalent to an AND of notEqualTo comparators

7.4. Parameters: Value Statements

  • value

Parameters can be either single-valued or multi-valued. For single-valued parameters like “limit”, if the parameter is declared more than once, the last value declared is taken. For multi-valued parameters, additional declarations add to the set of values.
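As a sketch of both behaviors inside a SELECT block (the segment names are illustrative, and the single-value-per-declaration `value segment:` form is an assumption; examples elsewhere on this page use the list form `value segments: ["mydata"]`):

```groovy
SELECT {
    value limit: 50
    value limit: 100           // single-valued: the last declaration wins, limit is 100
    value segment: "segment1"
    value segment: "segment2"  // multi-valued: both segments are searched
    // ... constraints ...
}
```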

7.5. Available Parameters

  • segment: used to specify the data segments the query is applied to

  • limit: used to limit the number of results returned

  • offset: used to skip an initial offset of results

  • direction: forward|reverse – specifies whether the arc is to be treated as “source” -→ “destination” or the reverse

  • parent: removes ambiguity in hyperarcs as to which element is the parent (connector or target)

  • target: removes ambiguity in hyperarcs as to what type the target is (node, edge, hyperedge, hypernode)

  • optional: used to specify that this ARC/HYPER_ARC is optional

7.6. Constraints

  • Property Constraints

  • Type Constraints

  • Provided Variable Constraints: constrains a named variable using a value or other named variable

7.7. Provides Statements

  • provides: associates a property with a named variable, which may be constrained in parent containers

  • node_provides

  • edge_provides

  • hypernode_provides

  • hyperedge_provides

 

7.8. Constraint Containers

  • constraint

  • node_constraint

  • edge_constraint

  • hypernode_constraint

  • hyperedge_constraint

7.9. Boolean Containers

  • AND

  • OR

7.10. Top Level Containers

  • SELECT

  • GRAPH

7.11. ARC Containers

  • ARC

  • HYPER_ARC

  • ARC_AND

  • ARC_OR

  • HYPERARC_AND

  • HYPERARC_OR

7.12. Constraint Context

  • target: removes ambiguity in HYPER ARCS to indicate that the enclosed provides/constraint applies to the target

  • connector: removes ambiguity in HYPER ARCS to indicate that the enclosed provides/constraint applies to the connector

  • source: reserved to refer to the enclosed source

7.13. Binding Names to Results

  • node_bind { "name" }: binds “name” to the URI of the node of the ARC

  • edge_bind { "name" }: binds “name” to the URI of the edge of the ARC

  • connector { bind { "name" } }: binds “name” to the URI of the connector of the ARC/HYPER_ARC

  • target { bind { "name" } }: binds “name” to the URI of the target of the ARC/HYPER_ARC

7.14. Select Queries

Example:

SELECT {
    value limit: 100
    value offset: 0
    value segments: ["mydata"]
    constraint { Person.class }
    constraint { Person.props().name.equalTo("John") }
}

7.15. Results

Results are returned in a ResultList containing GraphObjects.
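A ResultList can be iterated to access the matching objects. A sketch, assuming “service” is an initialized VitalService and “selectQuery” is the query built above, and that the ResultList exposes its GraphObjects via iteration:

```groovy
// Sketch only: iterate the returned container of GraphObjects
ResultList rl = service.query(selectQuery)
for (GraphObject g : rl) {
    // each result of the select query is a matching graph object
    println g.URI
}
```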

7.16. Graph Queries

Example:

Given a set of Email messages with links to senders and receivers, find all messages sent by “john@example.org”, excluding those he sent to himself.

GRAPH {
    value segments: ["mydata"]
    ARC {
        node_constraint { Email.class }
        constraint { "?person1 != ?person2" }
        ARC_AND {
            ARC {
                edge_constraint { Edge_hasSender.class }
                node_constraint { Person.props().emailAddress.equalTo("john@example.org") }
                node_constraint { Person.class }
                node_provides { "person1 = URI" }
            }
            ARC {
                edge_constraint { Edge_hasRecipient.class }
                node_constraint { Person.class }
                node_provides { "person2 = URI" }
            }
        }
    }
}

7.17. Results

Results are returned in a ResultList containing GraphMatch objects, each of which contains a set of URIs of the matching graph elements.

For the above example, each result includes:

  • URI of an email message

  • URI of the edge connecting the email to a sender

  • URI of the sender, which would be the URI of the Person with email address “john@example.org”

  • URI of the edge connecting the email to a recipient

  • URI of the recipient Person, which is enforced to be not the same as the sender
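When bind names are attached via node_bind / edge_bind (see Binding Names to Results), those names are a convenient way to pick the URIs out of each GraphMatch. A sketch, reusing bind names from the earlier binding example; the property-style access on GraphMatch shown here is an assumption:

```groovy
// Sketch only: assumes the query declared node_bind / edge_bind names,
// and that bound URIs are exposed as properties of each GraphMatch.
ResultList rl = service.query(graphQuery)
for (GraphObject g : rl) {
    GraphMatch match = (GraphMatch) g
    def sender = match."person-uri"
    def message = match."message-uri"
    println "${sender} sent ${message}"
}
```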

7.18. Aggregation Queries

The following aggregation functions are supported: DISTINCT, COUNT, SUM, AVERAGE, FIRST, LAST, COUNT+DISTINCT, FIRST+DISTINCT, LAST+DISTINCT.

7.19. Native Queries

A query specific to the endpoint implementation can be passed through directly. This includes Hive-SQL (Spark-SQL) and SPARQL queries. This is used principally for legacy or highly optimized queries. The results returned are objects containing the raw results.

7.20. Builder in Groovy, Java, Scala, and Spark

The VitalService query builder uses Groovy closures to compose queries. In Groovy source code, these are parsed directly.

In Java and Scala code, the same queries can be used, but passed to the builder as a String. The same should be true for any other JVM language.

In Scala (and Spark), this looks like:

    val email = "someone@example.org"
    def gQuery = builder.queryString (
        s"""
        GRAPH {
        value segments: ["mydata"]
        ARC {
           node_constraint { Email.class }
           ARC {
              node_constraint { Person.props().emailAddress.equalTo("${email}") }
           }
         }
        }
        """
       ).toQuery()

Note that the query is surrounded by the multiline string delimiter (""") and preceded by “s”, which enables substitution of variables, in this case the variable “email”. Thus, even as a string, the query can be a template with values passed in as needed.

7.21. Importing Domain Classes in Java, Scala, and Spark

In a constraint such as:

node_constraint { Email.class }

the domain class “Email” must be resolved to an implementation class. In Groovy code, these classes are found via the import statements of the enclosing class. When the query is passed in as a String, import statements must be included to allow the resolution of these classes.

If our domain classes are found within the package: com.mycompany.domain.*, then this import should be added into the query:

       def gQuery = builder.queryString (
         s"""
         // import statement at the head of the query
         import com.mycompany.domain.*
         // alternatively, individual imports can be stated, such as:
         // import com.mycompany.domain.Person
         // import com.mycompany.domain.Email
         GRAPH {
         value segments: ["mydata"]
         ARC {
            node_constraint { Email.class }
            ARC {
               node_constraint { Person.props().emailAddress.equalTo("${email}") }
            }
          }
         }
        """
       ).toQuery()

This allows the classes “Email” and “Person” to be found within the package “com.mycompany.domain” and resolved to com.mycompany.domain.Email and com.mycompany.domain.Person.

7.22. Supported Endpoints

  • Triplestores

    • Allegrograph

  • SOLR/Lucene

  • Amazon DynamoDB

  • Spark

  • MongoDB (In progress)

8. Vital Service Operations Builder

The builder can be used to build sets of operations to be processed by VitalService. These operations are:

  • INSERT

  • UPDATE

  • DELETE

9. VitalSigns Property Restriction Annotations

9.1. GraphObject Validation

A GraphObject may be validated by calling:

graphobject.validate()

This returns an instance of VitalStatus which includes a status of OK or INVALID.

If INVALID, the status object includes a map of fields to errors.

The VitalSigns config file may contain an entry for:

enforceConstraints = true

If set to “true”, constraints are enforced when property values are set, and an exception is thrown if a value is invalid.
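Putting this together, validation in code might look like the following sketch; “person” is a hypothetical domain object, and the exact accessor names on VitalStatus are assumptions:

```groovy
// Sketch only: the status constant and errors accessor are assumptions.
VitalStatus status = person.validate()
if (status.getStatus() != VitalStatus.Status.ok) {
    // an INVALID status carries a map of fields to errors
    println status.getErrors()
}
```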

 

9.2. Annotations

  • hasMaxValueExclusive

  • hasMaxValueInclusive

  • hasMinValueExclusive

  • hasMinValueInclusive

values: a literal or an individual of type vital-core:RestrictionAnnotationValue

Given a class Person and a property hasAge, to define the valid values of hasAge to be:

0 < hasAge <= 120

create two individuals of vital-core:RestrictionAnnotationValue:

ind1   hasRestrictionValue = 0
ind2   hasRestrictionValue = 120

Annotate both of these with:

vital-core:hasRestrictionClasses=Person

Then annotate the hasAge property with:

vital-core:hasMinValueExclusive = <ind1>
vital-core:hasMaxValueInclusive = <ind2>

10. Vital AI Platform Overview

The Vital AI Platform addresses one of the most problematic issues with using Big Data and Semantic Software — all the developer time is used in simply integrating and maintaining the components, and no time is left to truly make use of the technology to provide additional value.

The Vital AI Platform integrates Big Data and Semantic components together into a seamless framework allowing these components to be used “out of the box” and freeing up developer time to focus on what’s truly important in an application — providing value to the user.

Modules supporting different types of data analysis such as machine learning, natural language processing, and rule engines can be added into the application at any time without disrupting the application architecture or injecting new scaling problems.

The Vital AI Platform is divided into three primary layers:

  • VitalPrime

  • VitalFlows

  • Hadoop

An application typically interacts with the Vital AI Platform using a REST or Queue API. Client APIs are used within the application to access the platform. Client APIs are available in Groovy and Java, and other languages are easy to add.

The VitalPrime Server provides data and functionality required by the application in “real time”. VitalPrime provides access to an in-memory cache and various index and data repositories, including MySQL, Amazon DynamoDB, Voldemort, Allegrograph, 4Store, and HBase. The cache includes real-time activity data such as “views” on content or other “click” data. VitalPrime also provides a stored procedure mechanism using scripts called DataScripts for quick calculations; thus, “trend” information could be calculated using data within the activity cache.

VitalFlows provide data analysis in “near real time”. VitalFlows use work queues and workers to process data in a scalable way. A VitalFlow could be used in such cases as (1) processing content to extract entity names, (2) processing a “Movie” object to categorize it into a genre, (3) analyzing part of a social graph to find important influencers, or (4) interacting with an external API such as Facebook to acquire data. VitalFlows can be used in “real time”, but it’s best to use them when the data will be stored, cached, and presented to users at a later time.

Hadoop is for long-term storage of data and for offline / batch data analysis. As an example, Hadoop could be used to store all “Like” activity in an application. This data could be used by machine learning techniques to produce a recommendation model to recommend content to users. This model can then be used in “real time” in VitalPrime or in a VitalFlow to make ongoing recommendations, with a new model produced in Hadoop every day in a batch process.

Throughout the platform a common data model is used, easing data integration issues. The data bindings are created with a Vital AI Platform component called VitalSigns.

11. Hello World Example

A simple example using the Vital AI Client.

The goal of this example will be to process text with some basic Natural Language Processing steps, and print out the results.

(For those not in the know, see: http://en.wikipedia.org/wiki/Hello_world_program)

The steps involved are:

  1. Add a new class to the data model for your new entity

  2. Run VitalSigns to create data model bindings

  3. Add a simple dictionary-based NLP extraction rule to the NLP processing step to recognize your new entity

  4. Deploy the Vital AI Platform including the NLP Processor using the new extraction rule

  5. Start a new project in your favorite IDE

  6. Add the Vital AI Client libraries to your project

  7. Add a new “main” function which calls the Vital Service, passes in some text, and prints out the results

  8. Done!

A detailed description follows:

11.1. Add a new class to the data model for your new entity

Use an ontology editor such as Protege to create a new OWL ontology extending the Vital base OWL ontology. Using the editor, add a new class called “BrandEntity” extending the base “Node” class.

Using Protege to edit the Vital ontology: