Text Analysis API

Detect abusive content, obtain sentiment analysis, extract entities, detect topics, automatically correct spelling errors, and more, in 27 languages.

Invoke text analysis

The method analyzes the input, returning high-level and low-level metadata in JSON.

The request body is a JSON structure made of three elements:

  • language (string) - a standard IETF tag for the language to analyze
  • content (string) - the content to analyze
  • settings (structure) - the settings to apply when analyzing

Example:

{"language": "en", "content":"Hello Tisane API!", "settings": {}}

Response Reference

The response contains several sections which are displayed or hidden according to the settings.

The common attributes are (a sample fragment follows the list):

  • text (string) - the original input
  • reduced_output (boolean) - if the input is too long and verbose output (such as the lexical chunks) was requested, the verbose portions are not generated, and this flag is set to true in the response
  • sentiment (floating-point number) - a number in the range -1 to 1 indicating the document-level sentiment. Only shown when the document_sentiment setting is set to true.
  • signal2noise (floating-point number) - a signal to noise ranking of the text, relative to the array of concepts specified in the relevant setting. Only shown when that setting is present (see Signal to Noise Ranking below).
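
For illustration, a response fragment with both document_sentiment and the signal to noise ranking active might look like this (the values shown are illustrative only):

{
    "text": "The location is fantastic, but the staff is unfriendly.",
    "sentiment": -0.2,
    "signal2noise": 4.2
}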

Abusive Content

The abuse section is an array of detected instances of content that may violate some terms of use. NOTE: the terms of use in online communities vary, and so it is up to the administrators to determine whether the content is indeed abusive. For instance, it makes no sense to restrict sexual advances in a dating community, or to censor profanities when they are accepted by the bulk of the community.

The section exists if instances of abuse are detected and the abuse setting is either omitted or set to true.

Every instance contains the following attributes:

  • offset (unsigned integer) - zero-based offset where the instance starts
  • length (unsigned integer) - length of the content
  • sentence_index (unsigned integer) - zero-based index of the sentence containing the instance
  • text (string) - fragment of text containing the instance (only included if the snippets setting is set to true)
  • tags (array of strings) - when present, provides additional detail about the abuse. For instance, if the fragment is classified as an attempt to sell hard drugs, one of the tags will be hard_drug.
  • type (string) - the type of the abuse
  • severity (string) - how severe the abuse is. The levels of severity are low, medium, high, and extreme

The currently supported types are (a sample instance follows the list):

  • personal_attack - an insult / attack on the addressee, e.g. an instance of cyberbullying. Please note that an attack on a post or a point, or just negative sentiment is not the same as an insult. The line may be blurred at times. See our Knowledge Base for more information.
  • bigotry - hate speech aimed at one of the protected classes. The hate speech detected is not just racial slurs, but, generally, hostile statements aimed at the group as a whole
  • profanity - profane language, regardless of the intent
  • sexual_advances - welcome or unwelcome attempts to gain some sort of sexual favor or gratification
  • criminal_activity - attempts to sell or procure restricted items or criminal services, death threats, and so on
  • external_contact - attempts to establish contact or payment via external means of communication, e.g. phone, email, instant messaging (may violate the rules in certain communities, e.g. gig economy portals, e-commerce portals)
  • spam - (RESERVED) spam content
  • generic - undefined
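
For illustration, a hypothetical entry in the abuse section, with the snippets setting on, might look like this (the offsets, type, and severity depend entirely on the input):

"abuse": [
    {
        "offset": 11,
        "length": 5,
        "sentence_index": 0,
        "text": "idiot",
        "type": "personal_attack",
        "severity": "medium"
    }
]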

Sentiment Analysis

The sentiment_expressions section is an array of detected fragments indicating the attitude towards aspects or entities.

The section exists if sentiment is detected and the sentiment setting is either omitted or set to true.

Every instance contains the following attributes:

  • offset (unsigned integer) - zero-based offset where the instance starts
  • length (unsigned integer) - length of the content
  • sentence_index (unsigned integer) - zero-based index of the sentence containing the instance
  • text (string) - fragment of text containing the instance (only included if the snippets setting is set to true)
  • polarity (string) - whether the attitude is positive, negative, or mixed. There is also a default polarity, used when the entire snippet has been pre-classified. For instance, if a review is split into two portions, What did you like? and What did you not like?, and the reviewer replies briefly (e.g. The quiet. The service.), the utterance itself carries no sentiment value. When the calling application already knows the intended sentiment, the default polarity simply provides the targets / aspects, to which the sentiment is then attached externally.
  • targets (array of strings) - when available, provides a set of aspects and/or entities which are the targets of the sentiment. For instance, when the utterance is The breakfast was yummy but the staff is unfriendly, the targets for the two sentiment expressions are meal and staff. Named entities may also be targets of the sentiment.
  • reasons (array of strings) - when available, provides reasons for the sentiment. In the example utterance above (The breakfast was yummy but the staff is unfriendly), the reasons array for staff is ["unfriendly"], while the reasons array for meal is ["tasty"] (like the targets, the reasons are normalized to their dictionary concepts).

Example:

"sentiment_expressions": [
        {
            "sentence_index": 0,
             "offset": 0,
             "length": 32,
             "polarity": "positive",
             "reasons": ["close"],
             "targets": ["location"]
         },
         {
            "sentence_index": 0,
             "offset": 38,
             "length": 29,
             "polarity": "negative",
             "reasons": ["disrespectful"],
             "targets": ["staff"]
         }
     ]

Entities

The entities_summary section is an array of named entity objects detected in the text.

The section exists if named entities are detected and the entities setting is either omitted or set to true.

Every entity contains the following attributes:

  • name (string) - the most complete name of the entity found in the text, across all of its mentions
  • ref_lemma (string) - when available, the dictionary form of the entity in the reference language (English) regardless of the input language
  • type (string or array of strings) - the type of the entity, such as person, organization, numeric, amount_of_money, place. Certain entities, like countries, may have several types (because a country is both a place and an organization).
  • mentions (array of objects) - a set of instances where the entity was mentioned in the text

Every mention contains the following attributes:

  • offset (unsigned integer) - zero-based offset where the instance starts
  • length (unsigned integer) - length of the content
  • sentence_index (unsigned integer) - zero-based index of the sentence containing the instance
  • text (string) - fragment of text containing the instance (only included if the snippets setting is set to true)

Example:

 "entities_summary": [
        {
            "type": "person",
             "name": "John Smith",
             "ref_lemma": "John Smith",
             "mentions": [
                {
                    "sentence_index": 0,
                     "offset": 0,
                     "length": 10 }
             ]
         }
    ,
         {
            "type": [ "organization", "place" ]
        ,
             "name": "UK",
             "ref_lemma": "U.K.",
             "mentions": [
                {
                    "sentence_index": 0,
                     "offset": 40,
                     "length": 2 }
             ]
         }
     ]

Topics

The topics section is an array of topics (subjects, domains, themes in other terms) detected in the text.

The section exists if topics are detected and the topics setting is either omitted or set to true.

By default, a topic is a string. If the topic_stats setting is set to true, then every entry in the array contains (see the example after the list):

  • topic (string) - the topic itself
  • coverage (floating-point number) - a number between 0 and 1, indicating the ratio between the number of sentences where the topic is detected to the total number of sentences
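
For instance, with topic_stats set to true, the topics section might look like the following hypothetical fragment:

"topics": [
    {
        "topic": "physics",
        "coverage": 1.0
    }
]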

Advanced Low-Level Data: Sentences, Phrases, and Words

Tisane allows obtaining more in-depth data, specifically:

  • sentences and their corrected form, if a misspelling was detected
  • lexical chunks and their grammatical and stylistic features
  • parse trees and phrases

The sentence_list section is generated if the words or the parses setting is set to true.

Every sentence structure in the list contains (a sample entry follows the list):

  • offset (unsigned integer) - zero-based offset where the sentence starts
  • length (unsigned integer) - length of the sentence
  • text (string) - the sentence itself
  • corrected_text (string) - if a misspelling was detected and the spellchecking is active, contains the automatically corrected text
  • words (array of structures) - if the words setting is set to true, generates extended information about every lexical chunk. (The term "word" is used for the sake of simplicity; however, it may not be linguistically correct to equate lexical chunks with words.)
  • parse_tree (object) - if the parses setting is set to true, generates information about the parse tree and the phrases detected in the sentence.
  • nbest_parses (array of parse objects) - if the parses setting is set to true and the deterministic setting is set to false, generates information about the parse trees that were deemed close to the best one, but not the best.
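
A sample sentence entry might look like the following sketch (the values are hypothetical, and the words and parse_tree attributes are omitted for brevity):

"sentence_list": [
    {
        "offset": 0,
        "length": 16,
        "text": "I love this hotl",
        "corrected_text": "I love this hotel"
    }
]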

Words

Every lexical chunk ("word") structure in the words array contains (a sample entry follows the list):

  • type (string) - the type of the element: punctuation for punctuation marks, numeral for numerals, or word for everything else
  • text (string) - the text
  • offset (unsigned integer) - zero-based offset where the element starts
  • length (unsigned integer) - length of the element
  • corrected_text (string) - if a misspelling is detected, the corrected form
  • lettercase (string) - the original letter case: upper, capitalized, or mixed. If lowercase or no case, the attribute is omitted.
  • stopword (boolean) - determines whether the word is a stopword
  • grammar (array of strings or structures) - generates the list of grammar features associated with the word. If the feature_standard setting is set to native, every feature is an object containing an index (numeral) and a value (string). Otherwise, every feature is a plain string.
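
For example, a single entry in the words array might look like this hypothetical sketch (the grammar values shown assume a descriptive feature standard):

{
    "type": "word",
    "text": "car",
    "offset": 4,
    "length": 3,
    "stopword": false,
    "grammar": ["noun", "singular"]
}
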
Advanced

For lexical chunks only:

  • role (string) - semantic role, like agent or patient. Note that in passive voice, the semantic roles are reversed relative to the syntactic roles. E.g. in a sentence like The car was driven by David, car is the patient and David is the agent.
  • numeric_value (floating-point number) - the numeric value, if the chunk has a value associated with it
  • family (integer number) - the ID of the family associated with the disambiguated word-sense of the lexical chunk
  • definition (string) - the definition of the family, if the fetch_definitions setting is set to true
  • lexeme (integer number) - the ID of the lexeme entry associated with the disambiguated word-sense of the lexical chunk
  • nondictionary_pattern (integer number) - the ID of a non-dictionary pattern that matched, if the word was not in the language model but was classified by the nondictionary heuristics
  • style (array of strings or structures) - generates the list of style features associated with the word. Only generated if the feature_standard setting is set to native or description
  • semantics (array of strings or structures) - generates the list of semantic features associated with the word. Only generated if the feature_standard setting is set to native or description
  • segmentation (structure) - generates info about the selected segmentation, if there are several possibilities to segment the current lexical chunk and the deterministic setting is set to false. A segmentation is simply an array of word structures.
  • other_segmentations (array of structures) - generates info about the segmentations deemed incorrect during the disambiguation process. Every entry has the same structure as the segmentation structure.
  • nbest_senses (array of structures) - when the deterministic setting is set to false, generates a set of hypotheses that were deemed incorrect by the disambiguation process. Every hypothesis contains the following attributes: grammar, style, and semantics, identical in structure to their counterparts above; and senses, an array of word-senses associated with every hypothesis. Every sense has a family, which is an ID of the associated family; and, if the fetch_definitions setting is set to true, the definition and ref_lemma of that family.

For punctuation marks only:

  • id (integer number) - the ID of the punctuation mark
  • behavior (string) - the behavior code of the punctuation mark. Values: sentenceTerminator, genericComma, bracketStart, bracketEnd, scopeDelimiter, hyphen, quoteStart, quoteEnd, listComma (for East-Asian enumeration commas like 、)

Parse Trees and Phrases

Every parse tree, or more accurately, parse forest, is a collection of phrases, hierarchically linked to each other.

At the top level of the parse, there is an array of root phrases under the phrases element, and a numeric id associated with the parse. Every phrase may have children phrases. Every phrase has the following attributes:

  • type (string) - a Penn treebank phrase tag denoting the type of the phrase, e.g. S, VP, NP, etc.
  • family (integer number) - an ID of the phrase family
  • offset (unsigned integer) - a zero-based offset where the phrase starts
  • length (unsigned integer) - the span of the phrase
  • role (string) - the semantic role of the phrase, if any, analogous to that of the words
  • text (string) - the phrase text, where the phrase members are delimited by the vertical bar character. Children phrases are enclosed in brackets. E.g., driven|by|David or (The|car)|was|(driven|by|David).

Example:

"parse_tree": {
"id": 4,
"phrases": [
{
        "type": "S",
        "family": 1451,
        "offset": 0,
        "length": 27,
        "text": "(The|car)|was|(driven|by|David)",
        "children": [
                {
                        "type": "NP",
                        "family": 1081,
                        "offset": 0,
                        "length": 7,
                        "text": "The|car",
                        "role": "patient"
                },
                {
                        "type": "VP",
                        "family": 1172,
                        "offset": 12,
                        "length": 15,
                        "text": "driven|by|David",
                        "role": "verb"
                }
        ]
}

Context-Aware Spelling Correction

Tisane supports automatic, context-aware spelling correction. When the language model does not recognize a word, whether because of a misspelling or a purported obfuscation, Tisane attempts to deduce the intended meaning.

If the intended word is found, Tisane adds the corrected_text attribute to the word (if the words / lexical chunks are returned) and to the sentence (if the sentence text is generated).

Note that the invocation of spellchecking does not depend on whether the sentences and words sections are generated in the output. Spellchecking can be disabled by setting disable_spellcheck to true.
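
For example, a hypothetical request with a misspelled word:

{"language": "en", "content": "My best freind!", "settings": {}}

If the sentence text is generated, the sentence entry carries "corrected_text": "My best friend!", while the text attribute preserves the original input.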

Try it

Request URL

POST https://api.tisane.ai/parse

Request headers

  • Content-Type (optional, string) - media type of the body sent to the API (e.g. application/json)
  • Ocp-Apim-Subscription-Key (string) - subscription key which provides access to this API. Found in your Profile.

Request body

The request body is a JSON structure made of the language, content, and settings elements described above.

Settings Reference

The purpose of the settings structure is to provide content cues and instructions, customize the output, and select the standards and formats used in the response, as detailed in the subsections below.

All settings are optional. To leave all settings at their defaults, simply provide an empty object ({}).

Content Cues and Instructions

format (string) - the format of the content. Some policies are applied depending on the format, and certain logic in the underlying language models may require the content to be of a certain format (e.g. reviews are scanned for sentiment more aggressively). The default format is empty / undefined. The format values are:

  • review - a review of a product or a service or any other review. Normally, the underlying language models will look for sentiment expressions more aggressively in reviews.
  • dialogue - a comment or a post which is a part of a dialogue. An example of a logic more specific to a dialogue is name calling. A single word like "idiot" would not be a personal attack in any other format, but it is certainly a personal attack when part of a dialogue (see the example request after this list).
  • shortpost - a microblogging post, e.g. a tweet.
  • longform - a long post or an article.
  • proofread - a post which was proofread. In the proofread posts, the spellchecking is switched off.
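
For instance, a hypothetical request analyzing a one-word comment as part of a dialogue:

{"language": "en", "content": "idiot", "settings": {"format": "dialogue"}}

As described above, the same single word would not be flagged as a personal attack in any other format.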

disable_spellcheck (boolean) - determines whether the automatic spellchecking is to be disabled. Default: false.

subscope (boolean) - enables sub-scope parsing, for scenarios like hashtag, URL parsing, and obfuscated content (e.g. ihateyou). Default: false.

domain_factors (set of pairs made of strings and numbers) - provides session-scope cues for the domains of discourse. This is a powerful tool that allows tailoring the results to the use case. The format is: the family ID of the domain as a key, and the multiplication factor as a value (e.g. "12345": 5.0). For example, when processing text looking for criminal activity, we may want to weight the domains relevant to drugs, firearms, and crime higher: "domain_factors": {"31058": 5.0, "45220": 5.0, "14112": 5.0, "14509": 3.0, "28309": 5.0, "43220": 5.0, "34581": 5.0}. The same device can be used to eliminate noise from domains known to be irrelevant by setting the factor to a value lower than 1.

when (date string, format YYYY-MM-DD) - indicates when the utterance was uttered (TO BE IMPLEMENTED). The purpose is to prune word senses that were not available at that point in time. For example, the words troll, mail, and post had nothing to do with the Internet 300 years ago, because there was no Internet; in a text written hundreds of years ago, the word senses that emerged only recently should be ignored.

Output Customization

abuse (boolean) - output instances of abusive content (default: true)

sentiment (boolean) - output sentiment-bearing snippets (default: true)

document_sentiment (boolean) - output document-level sentiment (default: false)

entities (boolean) - output entities (default: true)

topics (boolean) - output topics (default: true), with two more relevant settings:

  • topic_stats (boolean) - include coverage statistics in the topic output (default: false). When set, every topic is an object containing the attributes topic (string) and coverage (floating-point number). The coverage indicates the share of sentences touching the topic among all the sentences.
  • optimize_topics (boolean) - if true, less specific topics are removed when they are part of more specific topics. For example, when the topic is cryptocurrency, the optimization removes finance.

words (boolean) - output the lexical chunks / words for every sentence (default: false). In languages without white spaces (Chinese, Japanese, Thai), the tokens are tokenized words. In languages with compounds (e.g. German, Dutch, Norwegian), the compounds are split.

fetch_definitions (boolean) - include definitions of the words in the output (default: false). Only relevant when the words setting is true.

parses (boolean) - output parse forests of phrases

deterministic (boolean) - determines whether only the best interpretation is output. If true, only the detected sense is output; if false, the n-best senses (nbest_senses) and parses (nbest_parses) are output in addition to it. Default: true

snippets (boolean) - include the text snippets in the abuse, sentiment, and entities sections (default: false)
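
As an illustration, a hypothetical request combining several of these settings:

{"language": "en", "content": "The breakfast was yummy but the staff is unfriendly.", "settings": {"snippets": true, "document_sentiment": true, "topics": false}}

This returns the abuse, sentiment, and entity sections with text snippets, adds a document-level sentiment score, and omits the topics.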

Standards and Formats

feature_standard (string) - determines the standard used to output the features (grammar, style, semantics) in the response object. The standards we support are:

  • ud - Universal Dependencies tags
  • penn - Penn treebank tags
  • native - native Tisane feature codes
  • description - plain-text descriptions of the native Tisane features

Only the native Tisane standards (codes and descriptions) support style and semantic features.

topic_standard (string) - determines the standard used to output the topics in the response object. The standards we support are:

  • iptc_code - IPTC topic taxonomy code
  • iptc_description - IPTC topic taxonomy description
  • iab_code - IAB topic taxonomy code
  • iab_description - IAB topic taxonomy description
  • native - Tisane domain description, coming from the family description (default)

sentiment_analysis_type (string) - (RESERVED) the type of the sentiment analysis strategy. The values are:

  • products_and_services - most common sentiment analysis of products and services
  • entity - sentiment analysis with entities as targets
  • creative_content_review - reviews of creative content
  • political_essay - political essays

Signal to Noise Ranking

When we're studying a set of posts commenting on an issue or an article, we may want to prioritize the ones more relevant to the topic, and containing more reason and logic than emotion. This is what the signal to noise ranking is meant to achieve.

The signal to noise ranking is made of two parts:

  1. Determine the most relevant concepts. This part may be omitted, depending on the use case scenario (e.g. when the set of issues to track is known in advance).
  2. Rank the actual post in relevance to these concepts.

To determine the most relevant concepts, we need to analyze the headline or the article itself. The headline is usually enough. We need two additional settings:

  • keyword_features (an object with string keys and string values) - determines the features to look for in a word. When such a feature is found, the word's family ID is added to the set of potentially relevant family IDs.
  • stop_hypernyms (an array of integers) - if a potentially relevant family ID has a hypernym listed in this setting, it will not be considered. For example, we may have extracted a set of nouns from the headline but have no interest in abstractions or feelings: from a headline like Fear and Loathing in Las Vegas, we want Las Vegas only. Optional.

If keyword_features is provided in the settings, the response will have a special attribute, relevant, containing a set of family IDs.

At the second stage, when ranking the actual posts or comments for relevance, this array is supplied among the settings. The ranking is boosted when the domains, hypernyms, or families related to those in the relevant array are mentioned, and when negative or positive sentiment is linked to aspects; it is penalized when negativity is not linked to aspects, or when abuse of any kind is found. The latter consideration may be disabled, e.g. when we are looking for specific criminal content: when the abuse_not_noise parameter is set to true, abuse is not penalized by the ranking calculations.

To sum it up, in order to calculate the signal to noise ranking (a sketch follows the steps):

  1. Analyze the headline with keyword_features and, optionally, stop_hypernyms in the settings. Obtain the relevant attribute.
  2. When analyzing the posts or the comments, specify the relevant attribute obtained in step 1.
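
A sketch of the two steps follows. The keyword_features contents and the family IDs are entirely hypothetical, and passing the array back under a relevant key in the settings is an assumption based on the attribute name:

Step 1 - analyze the headline:

{"language": "en", "content": "Las Vegas braces for a record tourist season", "settings": {"keyword_features": {"pos": "noun"}}}

The response then includes a relevant attribute, e.g. "relevant": [12345, 67890].

Step 2 - rank a post, supplying the array obtained in step 1:

{"language": "en", "content": "The hotels are packed again.", "settings": {"relevant": [12345, 67890]}}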

{"language":"en", "content":"Babylonians should not be allowed at managerial positions.", "settings":{"parses":false}}

Response 200

Extract topics only:

{"language":"en", "content":"An inertial force is a force that resists a change in velocity of an object.", "settings":{}}

{
	"text": "An inertial force is a force that resists a change in velocity of an object.",
	"topics": [
		"physics"
	]
}

Code samples

Curl

@ECHO OFF

curl -v -X POST "https://api.tisane.ai/parse" ^
     -H "Content-Type: application/json" ^
     -H "Ocp-Apim-Subscription-Key: {subscription key}" ^
     --data-ascii "{body}"
C#

using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            MakeRequest().GetAwaiter().GetResult();
            Console.WriteLine("Hit ENTER to exit...");
            Console.ReadLine();
        }

        static async Task MakeRequest()
        {
            var client = new HttpClient();
            var queryString = HttpUtility.ParseQueryString(string.Empty);

            // Request headers
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");

            var uri = "https://api.tisane.ai/parse?" + queryString;

            HttpResponseMessage response;

            // Request body
            byte[] byteData = Encoding.UTF8.GetBytes("{body}");

            using (var content = new ByteArrayContent(byteData))
            {
               content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
               response = await client.PostAsync(uri, content);
            }

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
Java

// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class JavaSample 
{
    public static void main(String[] args) 
    {
        HttpClient httpclient = HttpClients.createDefault();

        try
        {
            URIBuilder builder = new URIBuilder("https://api.tisane.ai/parse");


            URI uri = builder.build();
            HttpPost request = new HttpPost(uri);
            request.setHeader("Content-Type", "application/json");
            request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");


            // Request body
            StringEntity reqEntity = new StringEntity("{body}");
            request.setEntity(reqEntity);

            HttpResponse response = httpclient.execute(request);
            HttpEntity entity = response.getEntity();

            if (entity != null) 
            {
                System.out.println(EntityUtils.toString(entity));
            }
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
    }
}

JavaScript

<!DOCTYPE html>
<html>
<head>
    <title>JSSample</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>

<script type="text/javascript">
    $(function() {
        var params = {
            // Request parameters
        };
      
        $.ajax({
            url: "https://api.tisane.ai/parse?" + $.param(params),
            beforeSend: function(xhrObj){
                // Request headers
                xhrObj.setRequestHeader("Content-Type","application/json");
                xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
            },
            type: "POST",
            // Request body
            data: "{body}",
        })
        .done(function(data) {
            alert("success");
        })
        .fail(function() {
            alert("error");
        });
    });
</script>
</body>
</html>
Objective-C

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    
    NSString* path = @"https://api.tisane.ai/parse";
    NSArray* array = @[
                         // Request parameters
                         @"entities=true",
                      ];
    
    NSString* string = [array componentsJoinedByString:@"&"];
    path = [path stringByAppendingFormat:@"?%@", string];

    NSLog(@"%@", path);

    NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
    [_request setHTTPMethod:@"POST"];
    // Request headers
    [_request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
    // Request body
    [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];
    
    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];

    if (nil != error)
    {
        NSLog(@"Error: %@", error);
    }
    else
    {
        NSError* error = nil;
        NSMutableDictionary* json = nil;
        NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
        NSLog(@"%@", dataString);
        
        if (nil != _connectionData)
        {
            json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
        }
        
        if (error || !json)
        {
            NSLog(@"Could not parse loaded json with error:%@", error);
        }
        
        NSLog(@"%@", json);
        _connectionData = nil;
    }
    
    [pool drain];

    return 0;
}
PHP

<?php
// This sample uses the PEAR HTTP_Request2 package (http://pear.php.net/package/HTTP_Request2)
require_once 'HTTP/Request2.php';

$request = new Http_Request2('https://api.tisane.ai/parse');
$url = $request->getUrl();

$headers = array(
    // Request headers
    'Content-Type' => 'application/json',
    'Ocp-Apim-Subscription-Key' => '{subscription key}',
);

$request->setHeader($headers);

$parameters = array(
    // Request parameters
);

$url->setQueryVariables($parameters);

$request->setMethod(HTTP_Request2::METHOD_POST);

// Request body
$request->setBody("{body}");

try
{
    $response = $request->send();
    echo $response->getBody();
}
catch (HttpException $ex)
{
    echo $ex;
}

?>
Python

########### Python 2.7 #############
import httplib, urllib, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.urlencode({
})

try:
    conn = httplib.HTTPSConnection('api.tisane.ai')
    conn.request("POST", "/parse?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)

####################################

########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.parse.urlencode({
})

try:
    conn = http.client.HTTPSConnection('api.tisane.ai')
    conn.request("POST", "/parse?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)

####################################
Ruby

require 'net/http'

uri = URI('https://api.tisane.ai/parse')


request = Net::HTTP::Post.new(uri.request_uri)
# Request headers
request['Content-Type'] = 'application/json'
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"

response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
    http.request(request)
end

puts response.body