Get dataset list using C#

A synchronous call to an async method in a console application, using the Slamby .NET SDK v0.17.0.

The .NET SDK is available in the NuGet Gallery.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Slamby.SDK.Net.Managers;
using Slamby.SDK.Net;
using Slamby.SDK.Net.Models;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            // Configure the API endpoint and the API secret
            var configuration = new Configuration
            {
                ApiBaseEndpoint = new Uri("https://europe.slamby.com/demo/"),
                ApiSecret = "s3cr3t"
            };
            var manager = new DataSetManager(configuration);
            // Block on the async call; acceptable in a console application's Main method
            var response = manager.GetDataSetsAsync().GetAwaiter().GetResult();
            // Print the name of each dataset
            foreach (var item in response.ResponseObject)
            {
                Console.WriteLine(item.Name);
            }
            Console.ReadLine();
        }
    }
}

Add and modify document using Python

API version: 0.17.0

This example shows how to modify a document in a dataset. First we create a document, then we update it using the given document id.

import slamby_sdk
from slamby_sdk.rest import ApiException
import uuid

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")

document = {
              "id":str(uuid.uuid4()),
              "title":"demo",
              "desc":"description",
              "tags":[]
            }

# Create document
try:
    slamby_sdk.DocumentApi(client).create_document(document=document)
except ApiException as e:
    print(e)

# Change the title locally
document["title"] = "modified title"

# Send the updated document to the API
try:
    slamby_sdk.DocumentApi(client).update_document(id=document["id"],document=document)
except ApiException as e:
    print(e)

Search for documents using Filter and Python

API version: 0.17.0

For search you can use the built-in Filter function. This demo shows how to use a filter and how to process the result.

import slamby_sdk
from slamby_sdk.rest import ApiException

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")

filter = {
    "Filter" : {
        "TagIds" : [],
        "Query" : "demo"
    },
    "Pagination" : {
        "Offset" : 0,
        "Limit": 3,
        "OrderDirection" : "Asc",
        "OrderByField" : "title"
    }
}

result = None
try:
    result = slamby_sdk.DocumentApi(client).get_filtered_documents(filter_settings=filter)
    #print(result)
except ApiException as e:
    print(e)

# Print statistics
if result:
    total = result.total
    count = result.count
    found_documents = result.items
    print("Total:", total)
    print("Count:", count)

# Print document titles
if result:
    for item in result.items:
        print(item["title"])

Using Classifier Service for recommendation with Python

A demo Classifier service for category recommendation runs on the demo server. The Python example below shows how to call it.

import slamby_sdk
from slamby_sdk.rest import ApiException

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")

request = {
    "Text": "Macbook Pro 13 inch with retina display",
    "Count": "2",
    "UseEmphasizing": False,
    "NeedTagInResult": True
}

try:
    result = slamby_sdk.ClassifierServiceApi(client).recommend_service(id="371c7e7a-8d28-4a47-82fe-8f1fdb0b228e",request=request)
    #print(result)
except ApiException as e:
    print(e)
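
To get recommendations for several texts, you can call the same service once per text. The sketch below reuses the request object and the demo service id from above; the sample texts are made-up inputs.

# Ask for category recommendations for a few sample texts, one request per text
sample_texts = [
    "Macbook Pro 13 inch with retina display",
    "Samsung Galaxy S7 32GB black"
]

for text in sample_texts:
    request["Text"] = text
    try:
        result = slamby_sdk.ClassifierServiceApi(client).recommend_service(id="371c7e7a-8d28-4a47-82fe-8f1fdb0b228e", request=request)
        print(text, "->", result)
    except ApiException as e:
        print(e)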

Document import using Python SDK

Document import without bulk

A short example of importing data into a dataset without using bulk import.

import slamby_sdk
from slamby_sdk.rest import ApiException
import uuid

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")

document = {
              "id":str(uuid.uuid4()),
              "title":"demo",
              "desc":"description",
              "tags":[]
            }

try:
    slamby_sdk.DocumentApi(client).create_document(document=document)
except ApiException as e:
    print(e)

Bulk import without parallel processing

This example shows how to import data into your dataset using bulk import.

import slamby_sdk
from slamby_sdk.rest import ApiException
import uuid

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")


documents = {
     "documents":
         [
            {
              "id":str(uuid.uuid4()),
              "title":"demo",
              "desc":"description",
              "tags":[]
            },
            {
              "id":str(uuid.uuid4()),
              "title":"demo",
              "desc":"description",
              "tags":[]
            },
            {
              "id":str(uuid.uuid4()),
              "title":"demo",
              "desc":"description",
              "tags":[]
            }
        ]
}

try:
    slamby_sdk.DocumentApi(client).bulk_documents(settings=documents)
except ApiException as e:
    print(e)
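
Hard-coded documents are fine for a demo, but in practice you would build the bulk payload from your own data. The sketch below assumes your records are already available as a Python list of title and description pairs (the records shown are made-up samples) and reuses the client and imports from above.

# Build the bulk payload from a list of (title, description) records
records = [
    ("demo 1", "first description"),
    ("demo 2", "second description"),
    ("demo 3", "third description")
]

documents = {
    "documents": [
        {
            "id": str(uuid.uuid4()),
            "title": title,
            "desc": desc,
            "tags": []
        }
        for title, desc in records
    ]
}

try:
    slamby_sdk.DocumentApi(client).bulk_documents(settings=documents)
except ApiException as e:
    print(e)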

Single import with parallel processing

In this example we combine the single-document import with parallel processing. For parallel processing we use Parallel from joblib.

You can combine parallel processing with bulk import as well; a sketch of that combination follows the example below.

import slamby_sdk
from slamby_sdk.rest import ApiException
import csv
from joblib import Parallel, delayed

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")

# Import one CSV row as a single document
def addDocument(document):
    try:
        slamby_sdk.DocumentApi(client).create_document(document=document)
    except ApiException as e:
        print(e)

if __name__ == '__main__':
    # The CSV column names are expected to match the dataset schema fields
    with open('ads.csv', 'r') as csvfile:
        reader = csv.DictReader(csvfile)
        num_cores = 4
        # Send the documents using 4 parallel workers
        Parallel(n_jobs=num_cores)(delayed(addDocument)(document) for document in reader)
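
As mentioned above, parallel processing can be combined with bulk import: split the CSV rows into batches and send each batch with a single bulk request from a parallel worker. The sketch below makes the same assumptions about ads.csv as the example above; the batch size of 1000 and the 4 workers are arbitrary choices.

import slamby_sdk
from slamby_sdk.rest import ApiException
import csv
from joblib import Parallel, delayed

client = slamby_sdk.ApiClient("https://europe.slamby.com/demo/")
client.set_default_header("Authorization", "Slamby s3cr3t")
client.set_default_header("X-DataSet", "demo")

# Send one batch of CSV rows in a single bulk request
def addBatch(batch):
    try:
        slamby_sdk.DocumentApi(client).bulk_documents(settings={"documents": batch})
    except ApiException as e:
        print(e)

if __name__ == '__main__':
    with open('ads.csv', 'r') as csvfile:
        rows = list(csv.DictReader(csvfile))
    batch_size = 1000  # arbitrary batch size
    batches = [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]
    # Send the batches using 4 parallel workers
    Parallel(n_jobs=4)(delayed(addBatch)(batch) for batch in batches)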