Introduction
I recently got access to IAN by Cyphae.
IAN, short for Intelligent Agent Network, is, as the name implies, a network of LLM agents with a meta-agentic analysis system on top that collates the agents' outputs and provides a systematic analysis alongside their individual responses. IAN is free to use and doesn't require you to sign up.
Ever since I started playing around with it, I've been thoroughly impressed by its ability to be more accurate than any one of the individual LLMs operating in isolation.
My theory is that this is due to the network effect of agents collaborating and correcting each other's shortcomings. If statistical mechanics and nuclear physics have taught us anything, it's that the whole is greater than the sum of the parts, and this may be true for AI systems as well.
IAN initializes with a default squad consisting of
- GPT-4o
- Perplexity
- Claude 3.5
While these are quite good at most tasks, they aren't as effective as the deck I usually ball with:
- LLaMa3.1-70b (or LLaMa3.1-405b when it works)
- Perplexity
- Claude 3.5
I've found that a lineup combining Claude 3.5's sophisticated creativity, Perplexity's retrieval-augmented generation superpowers, and LLaMa's reasoning and text processing capabilities seems to work best for my prompting style.
What can IAN do for me?
As an example illustrating what IAN can do, a while back I was trying to improve my command over the concept of lifetimes in Rust, with the hopes of reducing how often I get my ass kicked by the borrow checker.
Being lazy and not wanting to spend time browsing resources and collating information myself when we are living in the age of language models, I prompted IAN to help me out with this.
I asked
Can you help me improve my understanding of how lifetimes work in Rust. I'd appreciate it if you could supplement your explanations with some examples and exercises for practice as well as important pointers to when it's necessary to use lifetimes.
IAN responded with
Understanding Lifetimes in Rust
Lifetimes in Rust are annotations that describe the scope of reference validity. They ensure memory safety by preventing dangling references and are a key part of Rust's ownership system.
Key Concepts
Reference Validity: Ensures references point to valid data within the scope.
Borrow Checking: Rust's compiler mechanism for safe references.
Compile-Time Feature: Checks lifetimes during compilation to ensure memory safety.
Elision: The compiler infers lifetimes in straightforward cases to simplify code.
Basic Syntax
fn example<'a>(x: &'a i32) -> &'a i32 {
x
}
Examples
Basic Lifetime Annotation:
fn longest<'a>(s1: &'a str, s2: &'a str) -> &'a str {
if s1.len() > s2.len() {
s1
} else {
s2
}
}
This function takes two string slices with the same lifetime and returns the longest one.
Structs with Lifetimes:
struct ImportantExcerpt<'a> {
part: &'a str,
}
fn main() {
let novel = String::from("Call me Ishmael. Some years ago...");
let first_sentence = novel.split('.').next().expect("Could not find a '.'");
let i = ImportantExcerpt { part: first_sentence };
}
The struct `ImportantExcerpt` holds a reference with a lifetime `'a`.
Exercises
Exercise 1: Write a function `smallest` that returns the smallest of two integer references with the same lifetime annotation.
fn smallest<'a>(x: &'a i32, y: &'a i32) -> &'a i32 {
if x < y {
x
} else {
y
}
}
Exercise 2: Define a struct `Book` with a title and author as string slices. Write a function that takes a slice of books and returns the book with the longest title.
struct Book<'a> {
title: &'a str,
author: &'a str,
}
fn book_with_longest_title<'a>(books: &'a [Book<'a>]) -> &'a Book<'a> {
let mut longest_book = &books[0];
for book in books.iter() {
if book.title.len() > longest_book.title.len() {
longest_book = book;
}
}
longest_book
}
When to Use Lifetimes
When writing functions that take references as parameters and/or return references.
When your structs hold references to prevent dangling references.
When dealing with complex scope and reference relationships to satisfy the borrow checker.
Informational Differences and Inconsistencies
Key Concept Emphasis:
GPT-4o and Perplexity emphasize the role of lifetimes in preventing dangling references.
Claude 3.5 focuses on the compiler's role and the implicit nature of lifetimes.
Likely Correct: GPT-4o and Perplexity, as preventing dangling references is central to Rust's ownership system.
Agent Strengths and Weaknesses
GPT-4o:
Strength: Provides detailed explanations and examples.
Weakness: Can be verbose and slightly complex for beginners.
Perplexity:
Strength: Provides a detailed and comprehensive explanation.
Weakness: Lacks some practical exercises to reinforce concepts.
Claude 3.5:
Strength: Simplicity and clear examples.
Weakness: Less detailed explanation on the importance of lifetimes.
Apart from answering the actual prompt itself and providing working code, IAN also provides a comparative analysis of "Informational Differences and Inconsistencies" and "Agent Strengths and Weaknesses", distilling each LLM agent's strengths and weaknesses in the context of the differences between their responses.
To me, this is what sets it apart as an LLM-powered service. You're not just prompting one LLM. You're prompting three. And you're not just going to be getting their responses. You're also going to be getting an analysis of how well they performed in comparison to each other. This is a big win for smarter and fairer AI systems.
What can I do for IAN?
Given IAN's increased precision and accuracy, which tends to lower the odds of hallucinations, I've been using it to help me understand codebases and generate code snippets. I craft prompts with instructions and source files strategically interwoven to maximize the context I'm able to give. These days, LLMs have context windows massive enough to process hundreds to thousands of lines with ease, so I don't have to be particularly worried about prompt size.
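The interweaving described above can be sketched as a small helper. Note that `buildPrompt` and the file layout here are hypothetical names for illustration, not anything IAN provides:

```javascript
// Sketch: interleave instructions with source files to build one large prompt.
// buildPrompt and the example file paths are hypothetical.
function buildPrompt(instructions, files) {
  // Each file becomes a labeled, fenced section so the model can tell
  // instructions apart from source code.
  const sections = files.map(
    ({ path, content }) => `### File: ${path}\n\`\`\`\n${content}\n\`\`\``
  );
  return [instructions, ...sections].join("\n\n");
}

const prompt = buildPrompt("Explain what this module does.", [
  { path: "src/util.js", content: "export const add = (a, b) => a + b;" },
]);
console.log(prompt);
```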
I've also been wanting to get IAN in the CLI. Like a true turtle, I spend most of my time in a shell, and having the ability to prompt and access IAN from the CLI would be quite convenient. It would make specific tasks far easier, such as providing source files and codebases as context to IAN without having to tediously paste them into the prompt by hand.
All of this led me to wonder about the possibility of building applications on top of IAN. I realized this would require such applications to be able to interact with IAN programmatically. However, I know IAN is still in beta and does not have a full-featured web UI yet, so an API client is a hard ask.
A quick search on GitHub and the interweb didn't turn up any preexisting solutions, but we cannot do without clients if we are to build applications. IAN's web UI exists, and technically it is a client. What if we could reverse engineer it to hack together an API interface specification that we can use to build our own clients?
My mind couldn't stop thinking about the immense potential and possibilities, so it was only a matter of time before I arrived at the thought of "Hey, this is a really useful thing and I want to use it in more places. I think it's worth a little hack. What's to lose?"
Besides, I have IAN with me. With a general idea of what I want to do and how to proceed at every step, I don't need to spend more time than necessary figuring out implementation details when I can accelerate significantly by taking IAN's help with figuring those out.
This will be an odyssey in prompt crafting and LLM-powered reverse engineering in order to create ianOS (yes, I came up with it), a suite of clients and applications designed to be used with IAN.
Hacking the web UI
To begin, we must first ask the question of what IAN even is as an application.
IAN's interface at cyphae.com is a web UI, and any web UI, be it IAN's, ChatGPT's, or Perplexity's, makes network requests (usually RESTful HTTP) to a backend cloud service with a data payload containing the user's prompt, with the goal of invoking an LLM for completions.
So I opened the `Network` tab in Firefox's developer tools interface to monitor what requests are being sent and what responses are being received.
I also opened the `Console` tab to quickly execute JavaScript scripts and snippets to test things out along the way.
I made sure to log out because I want an API that works even if one is not logged in, just like IAN's web UI. I didn't want the state/behavior of the web UI to be unintentionally influenced by my user account's credentials in any way.
With the `Network` tab open, I first refreshed the page to get a baseline of what requests are made at the time of First Contentful Paint (FCP).
A couple of HTTP `GET` requests to fetch media and code files, nothing special.
Of course it wouldn't be anything special, because IAN will only make an HTTP call for LLM inference when we submit a prompt in the web UI.
So let's do that.
I prompted IAN as follows
Can you write me a HTTP GET request in C++? If you're using any external libraries, please remember to include the relevant CMakeLists.txt and any other Make/CMake files that may be necessary.
As I submitted the prompt, two HTTP calls - a `POST` request and an `OPTIONS` request - were made to a domain called `oraloom.com`.
The `OPTIONS` request completed near-instantly. The `POST` request, on the other hand, took a good 30 seconds to complete and transferred about 7 KB of `text/plain` data back and forth, which could be the prompt and its corresponding response strings.
The `OPTIONS` request is just a CORS preflight; its only purpose is to ask the server which headers and methods are allowed, so it's not of much use to us.
Instead, let's inspect the `POST` request. Based on its HTTP verb (`POST`), response latency, and payload size, it seems to be the request responsible for sending the prompt payload to IAN's backend.
In Firefox, you can right-click any network request in the developer tools and select `Copy as Fetch` to copy to your clipboard a JS code snippet containing a `fetch` call for the respective network request. Alternatively, you can select `Use as Fetch in Console` to automatically paste the snippet into the console.
Selecting `Copy as Fetch`, let's copy this snippet to the clipboard.
await fetch("https://oraloom.com/prompt", {
"credentials": "omit",
"headers": {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0",
"Accept": "*/*",
"Accept-Language": "en-US,en;q=0.5",
"Content-Type": "application/json",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "cross-site",
"Priority": "u=4"
},
"referrer": "https://cyphae.com/",
"body": "{\"account\":\"guest\",\"wallet\":\"1\",\"user\":\"guest\",\"a1\":\"GPT-4o\",\"a2\":\"Perplexity\",\"a3\":\"Claude 3.5\",\"input\":\"Can you write me a HTTP GET request in C++? If you're using any external libraries, please remember to include the relevant CMakeLists.txt and any other Make/CMake files that may be necessary.\"}",
"method": "POST",
"mode": "cors"
});
Now let's paste this into the console and run it to see if it works out of the box without requiring any tweaks like changing the headers or providing any one-time hashes generated for this client-browser session.
On running this, I encountered a `NetworkError` pertaining to the kind of headers CORS allows via `Access-Control-Allow-Headers` when accessing remote resources.
Apparently, the `Priority` header is disallowed. That's okay, let's remove it from the `POST` request and try again.
Damn, another disallowed header. Okay fine, let's remove `User-Agent` and try once more.
Oh, that's better.
It seems `fetch` succeeded and obtained a `Response` object with a body containing a `ReadableStream`.
That is very promising. `ReadableStream`s are commonly used to stream responses from LLMs over the internet to a client, so this means we're on the right track.
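As a quick illustration of why that's promising: a `ReadableStream` body can be consumed incrementally, which is exactly how streamed LLM tokens typically arrive. Here's a minimal sketch using a locally constructed stream, with `makeStream` and `readAll` being illustrative helpers and not part of IAN's API:

```javascript
// Minimal sketch: consume a ReadableStream chunk by chunk, the way a
// streamed LLM response body would arrive over the network.
// makeStream/readAll are illustrative helpers, not part of IAN's API.
function makeStream(chunks) {
  return new ReadableStream({
    start(controller) {
      // Enqueue each chunk as bytes, then signal end-of-stream.
      for (const chunk of chunks) controller.enqueue(new TextEncoder().encode(chunk));
      controller.close();
    },
  });
}

async function readAll(stream) {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let text = "";
  // Pull chunks until the stream reports it is done.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(makeStream(["Hello", ", ", "world"])).then((text) => console.log(text));
```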
Let's copy this JS `fetch` snippet over to Neovim and run it locally.
P.S.: The `fetch` snippet uses `async/await`, but I'm running Node `v20.15.1`, which does not support top-level `await` in CommonJS scripts, so I dumped the snippet into an enclosing `async function main()` and invoked it. I also called `await response.text()` to read the `ReadableStream` body out into a string before passing it into `console.log`.
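For reference, the wrapper pattern from the P.S. looks like this, with the network call stubbed out so the shape is clear (`fakeFetch` is a stand-in for the real `fetch` to `oraloom.com`, not actual IAN code):

```javascript
// The async-wrapper pattern from the P.S.: since top-level await isn't
// available in CommonJS scripts, the awaited call goes inside main().
// fakeFetch is a stand-in for the real fetch call.
async function fakeFetch() {
  return { text: async () => "response body" };
}

async function main() {
  const response = await fakeFetch();
  const body = await response.text(); // read the response body into a string
  console.log(body);
}

main();
```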
Upon running this, it takes a while, but eventually we receive a response consisting of an HTML document containing the LLM's response, along with what seems to be markup and styling to render the response in any browser or web UI.
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Aggregated HTTP GET in C++ Guide</title>
<style>
body {
font-family: Arial, sans-serif;
background-color: black;
color: white;
padding: 20px;
}
h1 {
background: linear-gradient(to right, #60a5fa, #fdba74);
-webkit-background-clip: text;
color: transparent;
}
.section-title {
text-decoration: underline;
font-size: 1.2em;
}
.agent-box {
border: 1px solid #60a5fa;
padding: 10px;
margin-bottom: 20px;
}
code {
display: block;
white-space: pre-wrap;
background-color: #333;
padding: 10px;
border-radius: 5px;
margin-top: 10px;
margin-bottom: 10px;
}
</style>
</head>
<body>
<h1>Aggregated HTTP GET in C++ Guide</h1>
<h2 class="section-title">Introduction</h2>
<p>This guide provides examples and build instructions on how to perform an HTTP GET request in C++ using different libraries. We'll cover libcurl, cpp-httplib, and HTTPRequest. Each exa
mple includes consistent information while acknowledging any informational differences and highlighting each approach's strengths and weaknesses.</p>
<h2 class="section-title">Consistent Information</h2>
<p>The common approaches for making HTTP GET requests in C++ involve using external libraries such as <code>libcurl</code>, <code>cpp-httplib</code>, or <code>HTTPRequest</code>. The set
up involves writing the C++ code and configuring the build system using CMake.</p>
<h2 class="section-title">C++ Code Examples and Build Instructions</h2>
<div class="agent-box">
<h3>Instructions Using libcurl</h3>
<code>
// main.cpp<br>
#include <iostream><br>
#include <curl/curl.h><br><br>
size_t WriteCallback(void* contents, size_t size, size_t nmemb, std::string* output) {<br>
size_t total_size = size * nmemb;<br>
output->append((char*)contents, total_size);<br>
return total_size;<br>
}<br><br>
int main() {<br>
CURL* curl;<br>
CURLcode res;<br>
std::string response_string;<br>
curl_global_init(CURL_GLOBAL_DEFAULT);<br>
curl = curl_easy_init();<br>
if (curl) {<br>
curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com");<br>
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);<br>
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response_string);<br>
res = curl_easy_perform(curl);<br>
if (res != CURLE_OK) {<br>
std::cerr << "curl_easy_perform() failed: " << curl_easy_strerror(res) << std::endl;<br>
} else {<br>
std::cout << "Response data: " << response_string << std::endl;<br>
}<br>
curl_easy_cleanup(curl);<br>
}<br>
curl_global_cleanup();<br>
return 0;<br>
}
</code>
<h4>Build Configuration (CMakeLists.txt)</h4>
<code>
cmake_minimum_required(VERSION 3.10)<br>
project(HttpGetExample)<br><br>
set(CMAKE_CXX_STANDARD 11)<br><br>
find_package(CURL REQUIRED)<br><br>
add_executable(HttpGetExample main.cpp)<br>
target_link_libraries(HttpGetExample ${CURL_LIBRARIES})<br>
include_directories(${CURL_INCLUDE_DIRS})
</code>
</div>
<div class="agent-box">
<h3>Instructions Using cpp-httplib</h3>
<code>
// main.cpp<br>
#include "httplib.h"<br><br>
int main() {<br>
httplib::Client cli("http://example.com");<br>
auto res = cli.Get("/path/to/resource");<br>
if (res) {<br>
std::cout << "Status: " << res->status << std::endl;<br>
std::cout << "Body: " << res->body << std::endl;<br>
} else {<br>
std::cerr << "Request failed: " << cli.get_error_message() << std::endl;<br>
}<br>
return 0;<br>
}
</code>
<h4>Build Configuration (CMakeLists.txt)</h4>
<code>
cmake_minimum_required(VERSION 3.10)<br>
project(HTTPLibExample)<br><br>
set(CMAKE_CXX_STANDARD 14)<br><br>
add_executable(${PROJECT_NAME} main.cpp)<br>
target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
</code>
</div>
<div class="agent-box">
<h3>Instructions Using HTTPRequest</h3>
<code>
// main.cpp<br>
#include "HTTPRequest.hpp"<br><br>
int main() {<br>
try {<br>
http::Request request{"http://example.com/path/to/resource"};<br>
const auto response = request.send("GET");<br>
std::cout << std::string{response.body.begin(), response.body.end()} << '\\n';<br>
} catch (const std::exception& e) {<br>
std::cerr << "Request failed, error: " << e.what() << '\\n';<br>
}<br>
return 0;<br>
}
</code>
<h4>Build Configuration (CMakeLists.txt)</h4>
<code>
cmake_minimum_required(VERSION 3.10)<br>
project(HTTPRequestExample)<br><br>
set(CMAKE_CXX_STANDARD 17)<br><br>
add_executable(${PROJECT_NAME} main.cpp)<br>
target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
</code>
</div>
<h2 class="section-title">Informational Differences and Inconsistencies</h2>
<ul>
<li><strong>Libraries Used:</strong>
<ul>
<li><strong>libcurl:</strong> Recommended widely across sources for its robustness.</li>
<li><strong>cpp-httplib:</strong> Suggested by Perplexity, known for its simplicity and ease of use.</li>
<li><strong>HTTPRequest:</strong> Also highlighted by Perplexity for its simplicity.</li>
</ul>
</li>
<li><strong>URL Provided:</strong>
<ul>
<li><strong>General Usage:</strong> "http://example.com" (placeholder URL, most likely correct; commonly used for demonstration purposes).</li>
<li><strong>Specific Example:</strong> "https://api.example.com/data" (used by Claude 3.5, replace with actual endpoint).</li>
</ul>
</li>
</ul>
<p><strong>Likely Correct:</strong> The placeholder URL "http://example.com" is commonly used for demonstration purposes and should be replaced with the target URL as needed.</p>
<h2 class="section-title">Agent Strengths and Weaknesses</h2>
<div class="agent-box">
<h3>GPT-4o</h3>
<ul>
<li><strong>Strength:</strong> Provides a detailed example and CMake setup for libcurl.</li>
<li><strong>Weakness:</strong> Does not offer alternative library options.</li>
</ul>
</div>
<div class="agent-box">
<h3>Perplexity</h3>
<ul>
<li><strong>Strength:</strong> Introduces simpler library options like `cpp-httplib` and `HTTPRequest`.</li>
<li><strong>Weakness:</strong> Does not cover widely-used options like libcurl.</li>
</ul>
</div>
<div class="agent-box">
<h3>Claude 3.5</h3>
<ul>
<li><strong>Strength:</strong> Comprehensive explanation and setup for libcurl.</li>
<li><strong>Weakness:</strong> Uses a specific URL and does not mention simpler library options.</li>
</ul>
</div>
<h2 class="section-title">Building and Running the Project</h2>
<p>Follow these steps to build and run your project:</p>
<ol>
<li>Create a directory for the build:
<code>mkdir build && cd build</code>
</li>
<li>Run CMake to configure the build environment:
<code>cmake ..</code>
</li>
<li>Build the project:
<code>make</code>
</li>
<li>Run the executable:
<code>./<em>executable_name</em></code>
</li>
</ol>
<p><strong>Note:</strong> Replace <em>executable_name</em> with the name used in your CMake configuration.</p>
<p>If you need further assistance or have any questions, feel free to ask!</p>
</body>
</html>
Curious to see what this looks like, I dumped it into an `index.html` file and opened it in Firefox to see the rendered output.
Impressive! Although the design choice to send back an HTML document may have been for internal reasons, I for one appreciate this approach. This way, building the web UI component of any application attempting to integrate with IAN becomes much easier.
Nice, so we have some working JS code that we can use as a starting point to build a more comprehensive client.
Building The Client
First, let's flesh out the JS snippet and tweak it to be more general by allowing it to accept external data as input to relevant parameters.
To do this, I just prompted IAN for the following
I have this js snippet sending a HTTP request and reading out the response to the console.
*js code pasted here*
I have a couple of modifications I want you to make.
- Make this code more general such that it accepts external data passed into it as input for parameters such as `account`, `wallet`, `user`, `a1`, `a2`, `a3`, `input`
- Refactor the request functionality into a separate function called `postIANPrompt` and call this inside `main()`
This was IAN's solution
async function postIANPrompt(url, account, wallet, user, a1, a2, a3, input) {
const body = JSON.stringify({
account,
wallet,
user,
a1,
a2,
a3,
input,
});
const response = await fetch(url, {
credentials: "omit",
headers: {
Accept: "*/*",
"Accept-Language": "en-US,en;q=0.5",
"Content-Type": "application/json",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "cross-site",
},
referrer: "https://cyphae.com/",
body,
method: "POST",
mode: "cors",
});
if (!response.ok) {
throw new Error(`Response status: ${response.status}`);
}
return await response.text();
}
async function main(account, wallet, user, a1, a2, a3, input) {
const url = "https://oraloom.com/prompt";
try {
const result = await postIANPrompt(
url,
account,
wallet,
user,
a1,
a2,
a3,
input,
);
console.log(result);
} catch (error) {
console.error("Error:", error);
}
}
// Example usage:
main(
"guest",
"1",
"guest",
"GPT-4o",
"Perplexity",
"Claude 3.5",
"Can you write me a HTTP GET request in C++? If you're using any external libraries, please remember to include the relevant CMakeLists.txt and any other Make/CMake files that may be necessary.",
);
Quite comprehensive, and IAN also decided to include error handling even though I hadn't prompted it to do so - a welcome addition. This is also acknowledged in its aggregated analysis of the LLM outputs.
I implemented the changes and executed the latest version, which ran without any errors.
Alright, so far so good.
Upgrading to TypeScript
Now, JavaScript is nice, but TypeScript is a gift from the type theory gods for a better web development experience for everyone. So let's bring in the TypeScript compiler.
I also want to add some features to make this client more user-friendly. For example, each request takes anywhere from 5 to 50 seconds to complete, and we can make that wait less painful for the user with some kind of progress indicator in the UI.
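One way to sketch such an indicator is a timer that logs elapsed time while the slow request is in flight. `startSpinner` is a hypothetical helper of my own here, not part of the client IAN will generate:

```javascript
// Sketch of the idea: a timer-based loading indicator that reports elapsed
// seconds while a slow request is in flight. startSpinner is hypothetical.
function startSpinner() {
  const startTime = Date.now();
  const timer = setInterval(() => {
    const seconds = ((Date.now() - startTime) / 1000).toFixed(0);
    // \r rewrites the same console line instead of appending new ones.
    process.stdout.write(`\rWaiting for IAN... ${seconds}s elapsed`);
  }, 1000);
  // Return a stop function that clears the timer and yields total elapsed ms.
  return () => {
    clearInterval(timer);
    process.stdout.write("\n");
    return Date.now() - startTime;
  };
}

// Usage: const stop = startSpinner(); /* await the request */ const elapsedMs = stop();
```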
So first, I used `bun` to init a new project and install the TypeScript compiler and `bun`'s types.
$ bun init
$ bun add -d typescript @types/bun
`bun` scaffolds a couple of files for us, and we end up with this file structure.
Next, I asked IAN to write me a full-fledged TypeScript client, keeping in mind to include the following features:
- Typings, interfaces, error handling, and schemas for DTOs.
- Abstract out the request into a class that can be instantiated to create multiple request objects.
- Typings and options for `a1`, `a2`, `a3`, i.e., parameters to specify which models to use for the agent network when authenticated, with options being string IDs from the list `["GPT-4o", "GPT-4o-mini", "GPT-3.5", "Perplexity", "Gemini 1.5 Pro", "Gemini 1.5 Flash", "Gemini Pro", "Llama3.1-405b", "Llama3.1-70b", "Llama3-70b", "Qwen2-72B", "Mistral-8x22b-instruct", "Claude 3 Haiku", "Claude 3 Opus", "Claude 3 Sonnet", "Claude 3.5", "none"]` obtained from the IAN web UI.
- Typings and options to specify IAN credentials such as `account`, `wallet`, `user` if the user has an IAN account.
- Adding a loading indicator and logging the amount of time elapsed since the request was sent to indicate some kind of progress state to the user.
- `zod` and its schemas for data validation.
This required a decently comprehensive prompt.
You're an expert TypeScript developer who knows how to translate JavaScript code to TypeScript with appropriate improvements and modifications.
I have this JS code that sends a HTTP request to an LLM API and reads out the response to the console.
*test.js code pasted here*
I need your expertise and help with this.
Write me an analogous, full-fledged client for TypeScript keeping in mind to include the following features
- Typings, interfaces, error handling, and schemas for DTOs.
- Abstract out the request into a class that can be instantiated to create multiple request objects.
- Typings and options for a1, a2, a3, i.e., parameters to specify which models to be used for the agent network when authenticated, with options being string ids from the list `["GPT-4o", "GPT-4o-mini", "GPT-3.5", "Perplexity", "Gemini 1.5 Pro", "Gemini 1.5 Flash", "Gemini Pro", "Llama3.1-405b", "Llama3.1-70b", "Llama3-70b", "Qwen2-72B", "Mistral-8x22b-instruct", "Claude 3 Haiku", "Claude 3 Opus", "Claude 3 Sonnet", "Claude 3.5", "none"]` obtained from a website.
- Typings and options to specify credentials such as account, wallet, user if the user has an account.
- Adding a loading indicator and logging the amount of time elapsed since the request was sent to indicate some kind of progress state to the user.
- `zod` and its schemas for data validation.

You can split your code across multiple files if necessary, and feel free to import any libraries you think may be required.
Remember to write a comprehensive solution and give me each individual source file so that I can use it in my project.
IAN responded with an equally comprehensive response.
Aggregated TypeScript Client Implementation
Consistent Information
All sources consistently mention the creation of a TypeScript client for making HTTP requests, using zod for data validation, and handling errors appropriately. The structure of the implementation involves separate files for types and interfaces, the HTTP client class, and the main execution script.
Implementation Steps
1. Install Dependencies
npm install typescript zod node-fetch @types/node-fetch
2. Create TypeScript Configuration
{
"compilerOptions": {
"target": "ES6",
"module": "commonjs",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
}
}
3. Define Types, Interfaces, and Schemas
// models.ts (aliases: dtos.ts, types.ts)
import { z } from 'zod';
export const modelOptions = [
"GPT-4o",
"GPT-4o-mini",
"GPT-3.5",
"Perplexity",
"Gemini 1.5 Pro",
"Gemini 1.5 Flash",
"Gemini Pro",
"Llama3.1-405b",
"Llama3.1-70b",
"Llama3-70b",
"Qwen2-72B",
"Mistral-8x22b-instruct",
"Claude 3 Haiku",
"Claude 3 Opus",
"Claude 3 Sonnet",
"Claude 3.5",
"none"
] as const;
export type ModelOption = typeof modelOptions[number];
export interface PostIANPromptRequest {
account: string;
wallet: string;
user: string;
a1: ModelOption;
a2: ModelOption;
a3: ModelOption;
input: string;
}
export const postIANPromptRequestSchema = z.object({
account: z.string(),
wallet: z.string(),
user: z.string(),
a1: z.enum(modelOptions),
a2: z.enum(modelOptions),
a3: z.enum(modelOptions),
input: z.string()
});
export const IANResponseSchema = z.object({
response: z.string(),
elapsedTime: z.number(),
});
export type IANResponse = z.infer;
4. Create HTTP Client for Requests
// HTTPClient.ts (aliases: request.ts, IANClient.ts)
import fetch from 'node-fetch';
import { PostIANPromptRequest, postIANPromptRequestSchema, IANResponse, IANResponseSchema } from './models';
import { z } from 'zod';
export class HTTPClient {
private url: string;
constructor(url: string) {
this.url = url;
}
async postIANPrompt(request: PostIANPromptRequest): Promise {
const validation = postIANPromptRequestSchema.safeParse(request);
if (!validation.success) {
throw new Error('Invalid input data');
}
const startTime = Date.now();
console.log('Sending request to IAN...');
try {
const response = await fetch(this.url, {
method: 'POST',
headers: {
'Accept': '*/*',
'Accept-Language': 'en-US,en;q=0.5',
'Content-Type': 'application/json',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'cross-site',
},
body: JSON.stringify(request),
credentials: 'omit',
mode: 'cors',
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const rawResponse = await response.text();
const elapsedTime = Date.now() - startTime;
const validatedResponse = IANResponseSchema.parse({
response: rawResponse,
elapsedTime,
});
console.log(`Request completed in ${elapsedTime}ms`);
return validatedResponse;
} catch (error) {
console.error('Error in IAN request:', error);
throw error;
}
}
}
5. Main Execution Script
// index.ts (aliases: main.ts)
import { HTTPClient } from './HTTPClient';
import { PostIANPromptRequest } from './models';
async function main() {
const url = 'https://oraloom.com/prompt';
const client = new HTTPClient(url);
const request: PostIANPromptRequest = {
account: 'guest',
wallet: '1',
user: 'guest',
a1: 'GPT-4o',
a2: 'Perplexity',
a3: 'Claude 3.5',
input: 'Can you write me a HTTP GET request in C++?'
};
try {
const result = await client.postIANPrompt(request);
console.log(result);
} catch (error) {
console.error('Error:', error);
}
}
main();
Run the Application
npm run build
npm start
Inconsistencies and Informational Differences
Inconsistency in HTTP Libraries: GPT-4o and Claude 3.5 use node-fetch while Perplexity uses axios.
Inconsistency in Typings: Perplexity and Claude 3.5 define the types file as dtos.ts and types.ts respectively, whereas GPT-4o names it models.ts.
Agent Strengths and Weaknesses
GPT-4o
Strength: Provides a well-structured modular approach with clear definitions and explanations.
Weakness: Uses node-fetch which might require additional polyfills in Node.js environments.
Perplexity
Strength: Uses axios which is more feature-rich and often preferred for its simplicity in handling HTTP requests.
Weakness: Does not align with node-fetch used by other sources, leading to inconsistency.
Claude 3.5
Strength: Well-defined schemas and interfaces with clear separation of concerns.
Weakness: Similar to GPT-4o, uses node-fetch which might require polyfills and additional configurations.
Well, this looks good. IAN seems to have included something for all the features we specified. Let's install the dependencies, create the necessary files, and test this code out to see if it works out of the box again.
I created `src/types.ts`, `src/client.ts`, and `src/main.ts`, pasted the respective code snippets into them, and stitched up any imports that were missing between the three files.
As soon as I pasted the code into the files, ESLint pointed out a couple of bugs IAN made that I had to fix by hand. Specifically,

- IAN didn't specify the `z.infer` generic's concrete type, which I deduced from the context of the rest of the code to be `typeof IANResponseSchema`.

// BUG
export type IANResponse = z.infer;
// FIX
export type IANResponse = z.infer<typeof IANResponseSchema>;

- IAN didn't specify the `Promise` generic's concrete type, which I deduced from the context of the rest of the code to be `IANResponse`.

// BUG
async postIANPrompt(request: PostIANPromptRequest): Promise {
// FIX
async postIANPrompt(request: PostIANPromptRequest): Promise<IANResponse> {
So I guess no zero-shot luck this time, but that's okay. It got most of it right and the bugs only required minor fixes. I guess even LLMs struggle with type systems lol.
Let's run this TypeScript client implementation with `bun`.
$ bun run src/main.ts
We initially get a `Sending request to IAN...` message.
Then, the response is displayed in the console, which now also contains an `elapsedTime` property representing the number of milliseconds it took to complete the request.
Wrapping up and what's next
In my opinion, everything came together much faster than expected. Without IAN's assistance, it would have taken much longer to get to this point, with all the time spent looking up documentation, stitching together code snippets, and whatnot.
I tested this client a bit more by specifying different combinations of agents and even adding my own user credentials to see if that works.
Finally, I refactored the project to store the progress inside an `ianos/clients/js-ts` folder, did my source control due diligence, and pushed the repository to GitHub, which you can find at `bkataru/ianos`.
Next up in the series, let's continue to reverse engineer IAN with IAN and build ianOS along the way by creating:
- Python, Dart, Go, and Rust clients.
- a React-based responsive and performant web UI.
- a Python-based LLM agent framework with capabilities such as tool-calling and chain of thought.
- a Flutter mobile app to interact with IAN on Android/iOS.
- a Go CLI binary that works with the local filesystem in order to read in files as context before prompting IAN.
- a Rust desktop application using `Dioxus`.