Development of a SaaS idea generator using AI

Introduction

In a world where innovation is paramount, the use of artificial intelligence (AI) to stimulate creativity and generate new ideas has become a fascinating approach. This ambitious project aimed to create a system capable of generating SaaS (Software as a Service) ideas by putting two AIs in conversation with each other. The main objective was to explore the potential of AI for idea generation while keeping usage costs under control, a crucial aspect in making this technology accessible to more companies and entrepreneurs.

Concept and Implementation

The core of the system relies on an ingenious concept: engaging two distinct AIs in dialogue, each with a specific role in the creative process. The first AI is responsible for generating innovative SaaS ideas, while the second takes on the role of critic, analyzing and evaluating each proposal. This virtual dialogue continues iteratively until an idea deemed sufficiently promising emerges.

This approach simulates a real brainstorming session, in which ideas are constantly refined and improved through constructive feedback. It allows a wide range of possibilities to be explored while benefiting from immediate critical evaluation, thus accelerating the innovation process.
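To make the two roles concrete, the system prompts could look something like the following. The article does not reproduce its actual prompts, so the wording here is purely illustrative; only the variable names prompt_system_idees and prompt_system_critique come from the code excerpts shown later.

# Illustrative system prompt for the idea-generating AI.
prompt_system_idees = (
    "You are a creative product strategist. Propose one original SaaS idea, "
    "describing the target customers, the problem it solves and the business model. "
    "Take the critic's previous feedback into account to refine your proposal."
)

# Illustrative system prompt for the critic AI.
prompt_system_critique = (
    "You are a demanding startup analyst. Evaluate the proposed SaaS idea on "
    "originality, feasibility and market potential, point out its weaknesses, "
    "and begin your answer with an overall score out of 10, e.g. '7/10'."
)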

Technological Choices

In the interest of efficiency and cost control, OpenAI's GPT-4o mini model was selected to power both AIs. This choice is based on its excellent performance-to-cost ratio: it offers advanced natural language processing capabilities while remaining economically viable for an experimental project.

To further reduce operational costs, a batch approach built on OpenAI's Batch API was implemented. This method groups multiple requests into a single batch job that is processed asynchronously at a discounted rate, significantly reducing both the number of round trips to the OpenAI service and the associated costs.

Here's an excerpt of the code illustrating this batch approach:

import json
import time

def openai_question_batch(client, questions, model='gpt-4o-mini', prompt_system='You are a helpful assistant.'):
    input_file = 'file/batch_input.jsonl'

    # Write one JSONL request per question; json.dumps handles quote/newline escaping.
    with open(input_file, "w") as f:
        for i, question in enumerate(questions):
            body = {"model": model,
                    "messages": [{"role": "system", "content": prompt_system},
                                 {"role": "user", "content": question}],
                    "max_tokens": 1000}
            f.write(json.dumps({"custom_id": f"request-{i}", "method": "POST",
                                "url": "/v1/chat/completions", "body": body}) + "\n")

    # Upload the JSONL file, then submit it as a batch job.
    batch_input_file = client.files.create(
        file=open(input_file, "rb"),
        purpose="batch"
    )

    res_submit_batch = client.batches.create(
        input_file_id=batch_input_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"description": "nightly eval job"}
    )

    # Wait for the batch to complete, then download and parse the results
    # (simplified polling: no timeout or failure handling).
    while True:
        batch = client.batches.retrieve(res_submit_batch.id)
        if batch.status == "completed":
            break
        time.sleep(30)

    output = client.files.content(batch.output_file_id).text
    results = {}
    for line in output.strip().splitlines():
        item = json.loads(line)
        results[item["custom_id"]] = item["response"]["body"]["choices"][0]["message"]["content"]
    return [results[f"request-{i}"] for i in range(len(questions))]

The openai_question_batch function illustrates the batch processing method: it prepares a JSONL file containing multiple requests, then submits the whole file in a single operation. This allows several questions to be processed together, reducing the overhead and cost of issuing individual API calls.
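As a quick usage sketch (the example questions are illustrative; the function returns one answer string per question, in order):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two independent questions grouped into a single batch submission.
answers = openai_question_batch(
    client,
    questions=["Propose a SaaS idea for independent bakeries.",
               "Propose a SaaS idea for freelance translators."],
)
print(answers[0])
print(answers[1])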

Dialogue Structure

The dialogue between the two AIs is structured in several iterations, each constituting a step in the process of generating and evaluating ideas. The main program loop orchestrates these exchanges, allowing for progressive refinement of the proposed concepts.

Here's an excerpt of code illustrating the main dialogue structure:

from collections import deque

# client, the question prompts and the MAX_* constants are defined elsewhere in the program.
def main():
    for attempt in range(MAX_ATTEMPTS):
        # Short-term memory: only the five most recent exchanges are kept
        # (filled and used when building the prompts, which this excerpt omits).
        conversation_history = deque(maxlen=5)
        iteration = 1
        success = False

        while iteration <= MAX_ITERATIONS:
            # Generator AI: propose (or refine) a SaaS idea.
            reponse_idee = openai_question_batch(client=client, questions=[question_idee], prompt_system=prompt_system_idees)[0]

            # Critic AI: evaluate the latest proposal.
            reponse_critique = openai_question_batch(client=client, questions=[question_critique], prompt_system=prompt_system_critique)[0]

            # Pull the numeric rating out of the critique.
            score = extract_score(reponse_critique)

            # Stop as soon as an idea is judged good enough.
            if score >= 9:
                success = True
                break

            iteration += 1

This code simulates a brainstorming process between the two AIs, each having a specific role in generating and evaluating ideas. The main loop manages multiple attempts (MAX_ATTEMPTS), each composed of several iterations (MAX_ITERATIONS). In each iteration, an idea is generated and then critiqued, and a score is extracted from the critique. If the score reaches or exceeds 9 out of 10, the attempt is considered a success and the loop terminates.
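The loop bounds referenced above are plain module-level constants; the article does not state the values actually used, so the ones below are purely illustrative:

# Hypothetical configuration: number of fresh brainstorming runs allowed, and
# number of generate/critique rounds within each run before giving up.
MAX_ATTEMPTS = 3
MAX_ITERATIONS = 10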

Technical Challenges and Solutions

The implementation of this system presented several interesting technical challenges, requiring creative solutions:

  1. Conversation Coherence: The use of batch mode required careful design of system prompts for each AI to maintain dialogue coherence. Prompts were crafted to include sufficient context, allowing each AI to maintain conversation continuity despite the absence of a real-time exchange.
  2. Score Extraction: Quantitative evaluation of ideas required reliable extraction of numerical scores from textual critiques. To address this challenge, a specific extraction function was developed:
import re

def extract_score(critique):
    # Match the first integer, optionally followed by "/10" (e.g. "7/10" or "7").
    score_pattern = r'(\d+)(?:/10)?'
    match = re.search(score_pattern, critique[:50])
    if match:
        score = int(match.group(1))
        return min(score, 10)  # cap at 10 in case of malformed output
    return 0

This function uses a regular expression to identify and extract the numerical score from critiques, restricting the search to the first 50 characters so that the rating given at the start of the critique is picked up rather than other numbers mentioned later in the text.

  3. Conversation Memory Management: To allow the AIs to refer to previous ideas and critiques without overloading the context, a short-term memory system was implemented using a deque data structure with a fixed maximum length, as sketched below.
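A minimal sketch of this memory mechanism, assuming the recent exchanges are simply concatenated into the critic's next question (the helper name build_question_critique and the prompt wording are illustrative, not taken from the original code):

from collections import deque

# Keep only the five most recent entries so the prompt stays small.
conversation_history = deque(maxlen=5)

def build_question_critique(idea, history):
    # Illustrative helper: hand the critic the latest idea plus the recent context.
    context = "\n".join(history) if history else "None yet."
    return (f"Previous exchanges:\n{context}\n\n"
            f"New SaaS idea to evaluate:\n{idea}\n\n"
            f"Give your critique, starting with a score out of 10 (e.g. '7/10').")

# After each iteration the idea and its critique are appended; once the deque
# holds five entries, the oldest one is silently discarded.
idea = "A SaaS that automates invoice follow-ups for freelancers."
conversation_history.append(f"Idea: {idea}")
conversation_history.append("Critique: 6/10 - Crowded market, needs a sharper niche.")
print(build_question_critique(idea, conversation_history))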

Results and Observations

The project demonstrated the ability of AIs to generate and evaluate ideas autonomously. The batch approach proved effective in reducing the usage costs of language models without compromising the quality of results.

The generated ideas showed impressive diversity, covering a wide spectrum of application domains for SaaS. Some proposals stood out for their originality and disruptive potential, illustrating the system's ability to think "outside the box."

Conclusion and Perspectives

This AI-based SaaS idea generation system illustrates the potential of language models as tools for aiding creativity and innovation. Cost optimization via batch mode opens up perspectives for broader applications of AI in contexts where resources are limited.

Future improvements could include refining prompts, integrating more detailed market data, and potentially adding a third AI to simulate the role of an investor. These developments could further enhance the relevance and quality of generated ideas, making this system a valuable tool for entrepreneurs and innovation teams.
