Lessons learned from our GenAI boot camp and hackathon

Georgian
Georgian Impact Blog
10 min read · Aug 8, 2023


By: Azin Asgarian, Chelsea Lefaivre & David Tingle

With generative AI (GenAI) development moving quickly, the Georgian team held a GenAI boot camp for our portfolio companies (our customers) to help them kickstart their own GenAI journey. In this blog, we aim to share our takeaways from running the boot camp.

The Genesis of our GenAI Boot Camp

At Georgian, our 30+ person R&D team supports our companies as they look to adopt emerging technologies like GenAI and build new, differentiated products or product features. The team runs hackathons (typically two to three weeks long) and more in-depth engagements that span several months.

In our typical hackathon model, we partner with a single company to address a specific issue its team is tackling. From our experience participating in the Vector Institute’s Prompt Engineering Lab event, we recognized the value of a group-oriented delivery model. The approach encouraged collective learning and cross-pollination of ideas. Since GenAI is evolving so quickly, and much of the knowledge, concepts and code could be relevant across different companies’ use cases, we decided to compress the timeline and work with many of our customers at the same time in our very first GenAI boot camp.

In this boot camp, we aimed to provide education and support for 22 teams from 21 companies, drawn from our portfolio and from CoLab — our pre-investment program for early-stage startups — with the goal of educating our companies on GenAI and supporting them as they took a GenAI use case from ideation to implementation in a matter of days.

Benefits of a GenAI Boot Camp

By adapting our typical offerings to this new format, we were able to accomplish our two goals:

  1. Educating companies on GenAI: The boot camp was designed to boost the participants’ understanding of GenAI and offer them a clear view of the possibilities and limitations within this domain.
  2. Scaling our hackathon model to help 21 companies simultaneously: The boot camp set out to provide guidance on how to identify, structure and prioritize diverse opportunities aligned with each company’s unique business needs. The format allowed teams to implement their proofs of concept (POC) within just a few days, enabling them to promptly witness the results of their efforts. By scaling our usual 1:1 hackathon model this way, we were able to be more efficient and productive.

Planning a Successful GenAI Boot Camp

The Organizing Committee

Given the tight timeline, we realized that the success of the boot camp would depend on having representatives from various parts of our business. Our core team was the same cross-functional team that organizes Transferred Learnings, a community for learning about Applied AI. This team includes technical expertise, marketing acumen and community management — all important functions for initiatives of this nature. This core group was augmented with 11 other team members from across Georgian.

In addition to our internal team, Deval Pandya, David Emerson and others at the Vector Institute helped us with planning. Having gone through a similar experience, they were able to share knowledge and practical insights, which helped in the execution of the boot camp.

Boot Camp Structure

By the end of the boot camp, we wanted the participants to have a solid foundation in GenAI and be equipped with the skills to apply the technology to their own use cases. To that end, we structured the boot camp across three phases:

  1. Use case identification and evaluation
  2. Educational tutorials, with access to tools and demos
  3. Hackathon for building prototypes

In the initial phase, members of our Product and AI teams worked with each team to clarify its use case and product strategy.

Since our participants had different levels of experience with Large Language Models (LLMs), the next phase provided educational presentations. Georgian practitioners and others in the GenAI ecosystem — including the Vector Institute, Google and Private AI — acted as our tutors.

During the tutorial sessions, we covered introductory materials on LLMs. We complemented these tutorials with demo notebooks and code assets. In addition, machine learning practitioners were on hand during open office hours to answer queries and address any challenges the teams encountered.

In the final phase, we provided one-on-one technical support to help teams create rapid prototypes for their use cases. By giving personalized support for their specific technical needs, we aimed to accelerate the prototyping process.

Inviting Participants

The teams were divided into two categories depending on their GenAI maturity:

  1. Education: If teams were earlier in the GenAI journey, they had the option to attend the educational portion without committing to a hackathon project.
  2. Hackathon: If teams had the time, resources and GenAI experience to take on a hackathon project, they would be given extra support from the technical team.

Teams participating in the hackathon portion generally brought together different backgrounds and perspectives to help with ideation and implementation of feasible GenAI solutions. Typically, a team included a blend of technical members (familiar with Python and machine learning concepts) and at least one product manager (with an understanding of the business’s internal and customer processes).

Use Case Identification

While our boot camp use cases mainly focused on natural language processing (NLP), GenAI applications more broadly can be divided into text, speech and image (computer vision) applications.

In our case, we prompted our teams to think about different applications, including:

  1. Asset Generation: GenAI is used to create new assets such as code, images, audio and text. This application encompasses a wide range of sub-applications and use cases, and it forms the core of many emerging companies in this space.
  2. Conversational UI: GenAI enables conversational interfaces, including chatbots. Conversational UIs serve as important interfaces for complex products, knowledge bases and processes, providing a flexible and user-friendly method of interaction.
  3. Knowledge Retrieval: GenAI models, paired with vector databases, are effective for knowledge retrieval. These models can understand and surface relevant information from various data sources like documentation, FAQs or help desk materials. While there is overlap with conversational UI, knowledge retrieval is distinct in its focus on surfacing information. (A minimal sketch of this retrieval pattern follows the list.)
  4. Text Utilities: Language models excel at understanding inputs, and text utilities leverage this ability. Tasks like sentiment scoring, extracting meta-information, translation and summarization fall into this category, enabling data analysis and insights.
  5. Task Execution: GenAI models can be used to perform specific tasks within software environments. This application is an emerging area where models can execute activities that would typically be done through other means. For example, in analytics, these models can query and analyze datasets based on natural language prompts, freeing analysts from manual tasks to do higher value work. Additionally, models can interact with other software systems, executing specific actions and tasks based on predefined parameters. While still in its early stages, task execution presents an interesting area to explore for leveraging the capabilities of GenAI models in software environments.
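
To make the knowledge retrieval pattern concrete, here is a minimal sketch of retrieval-augmented generation in Python. It assumes the openai SDK (v1+) with an OPENAI_API_KEY in the environment; the model names and toy document list are our illustrative choices, not any team’s actual stack, and a real system would swap the in-memory arrays for a vector database.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy corpus; a real deployment would store these vectors in a vector database.
docs = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include 24/7 phone support.",
    "API rate limits reset every 60 seconds.",
]

def embed(texts):
    """Embed a list of strings into one vector per string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question, k=2):
    # Retrieve the k documents most similar to the question (cosine similarity).
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(docs[i] for i in np.argsort(scores)[-k:])
    # Ask the chat model to answer grounded only in the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))
```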

A Framework to Evaluate GenAI Projects

Once the teams had identified potential projects, we used the evaluation framework below to prioritize:

  • Desirability: Does your proposed project solve a high-priority pain point or enable an opportunity for your customers? Assess market desire and the overall impact the project will have.
  • Viability: Consider economic value creation for the business. Does the project offer more upside than costs, considering factors such as margins and deployment expenses?
  • Feasibility: Is the project technically feasible to build? Consider maintenance and support over the long-term along with market implementation, including sales potential.

Support for Participants

The Georgian R&D team offered technical support for teams in two formats: drop-in office hours and 1:1 bookable time slots.

Because the development period was fairly short — only two dedicated days — these support touch points were important for teams to make progress on their prototypes.

Most of the support requests fell into these three areas:

  1. Ideation and problem scoping: In several cases, the original use cases were too broad to accomplish in the timeline, and teams needed help breaking them down into smaller tasks and scoping them to fit the boot camp.
  2. Setup and tooling: Another common request was how to use specific tools, like LangChain and PandasAI, or how to set up their development environment.
  3. Prompt engineering and customizing LLMs: The third common support request was help customizing LLMs or tweaking their prompts. Often this involved taking one large task that the LLM did passably and splitting it into multiple smaller tasks that the LLM could handle better (a sketch of this decomposition pattern follows the list).
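
As an illustration of that third pattern, here is a hedged sketch of prompt decomposition: one broad request replaced by a chain of narrow prompts, each feeding the next. It assumes the openai Python SDK; the prompts, model name and example ticket are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(instruction, text):
    """Run one small, narrowly scoped prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

ticket = "The new dashboard looks great, but CSV exports have failed since Monday."

# Rather than one prompt that summarizes, diagnoses and replies all at once,
# chain three focused prompts, feeding each output into the next step.
summary = ask("Summarize this support ticket in one sentence.", ticket)
issue = ask("Name the single actionable issue in this summary.", summary)
reply = ask("Draft a short, empathetic support reply for this issue.", issue)

print(reply)
```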

Trends We Noticed

Based on our previous hackathons and workshops, we came into this engagement with a pretty good idea of what to expect. Here are some trends we noticed that surprised us!

Use Case Ideal Types vs. Combinations
Many teams combined use cases from the list above in interesting ways (e.g., a chatbot leveraging a vector database).

Analytics Is a Recurring Theme
One combination we saw several times was related to analytics: specifically, combining a natural language interface, a vector database storing embeddings for the underlying data, and a prompt chain that generated and then executed analytics queries to explore or analyze that data using language.
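
Here is a minimal sketch of that analytics pattern, reduced to its core loop: the model turns a natural-language question into a SQL query, and the application executes it. It assumes the openai SDK and uses an in-memory SQLite table as stand-in data; a production version would add embedding-based schema retrieval and validate the generated SQL before running it.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in dataset; real use cases would point at an actual warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("west", 340.0), ("east", 75.5)],
)

SCHEMA = "Table sales(region TEXT, amount REAL)"

def ask_data(question):
    # Step 1: have the model draft a query from the question and the schema.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Write a single SQLite SELECT statement for this "
                           f"schema, with no explanation:\n{SCHEMA}",
            },
            {"role": "user", "content": question},
        ],
    )
    # Crude cleanup in case the model wraps the query in a code fence.
    sql = resp.choices[0].message.content.strip().strip("`")
    sql = sql.removeprefix("sql").strip()
    # Step 2: execute the generated query against the data.
    return conn.execute(sql).fetchall()

print(ask_data("What is the total sales amount by region?"))
```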

Chat Interfaces Come to the Fore
Many teams leveraged the power of natural language interfaces to build better experiences for users. However, chat functionality alone doesn’t make a product successful. The most successful applications of chat, in our experience, are typically linked to a specific purpose or goal for the conversation, one that matters for the user. Helping the user solve key challenges or unlocking new capabilities is essential for finding product-market fit for your conversational interface.

Internal and External Use Case Balance
Many teams scoped and built customer-facing POCs, but a number of teams also built internal-facing functionality that automated or augmented key processes for colleagues in their organization.

Teams Built More (And Faster) Than We Anticipated
Going into the boot camp, our objective was to make technical development (building a POC) as easy as possible. With that said, we expected that many participants would focus on learning and brainstorming rather than coding. We were surprised to see the vast majority of teams submit at least one POC for the competition at the end of the boot camp, which shows how fast development can happen in the era of GenAI and how accessible the technology is.

Edge Cases Matter
Looking across the teams’ experience, it would appear that building a GenAI use case that gets 80% of the way towards viable functionality is relatively straightforward, but solving the edge cases and failure modes in the last 20% takes a lot of effort. Many teams spent more time than expected resolving these types of issues.

Continued Progress
After the boot camp ended, we were happy to see teams continuing to work on their projects. Some teams even shared that they were planning to run similar hackathons with their own teams to continue their learning and development.

Recognition

We recognized three teams for their projects:

  • The Impactful Innovator Award: Awarded to the team whose feature has the potential for positive, real-world impact and is closest to production readiness. This award underscores the important intersection of practicality and large-scale impact.
  • The Pioneering Prodigy Award: Awarded to the team whose AI features are novel, innovative and “out of the box”. This award emphasizes the creative and forward-thinking aspects of the hackathon.
  • The Ethical Engineer Award: Awarded in recognition of the team that incorporates responsible AI practices into their project, such as fairness, privacy or transparency considerations. This award reflects the importance of ethical considerations in AI development.

Lessons Learned

By bringing in experienced practitioners from across the various Georgian teams and our network, we believe that we were able to deliver a smooth experience for our participants. All the teams deepened their understanding and adoption of a rapidly evolving technology and produced prototypes that, in our view, could evolve their own products or business practices.

Specifically, our teams appreciated having pre-made examples, demos and notebooks to ground their learning. Access to LLM platforms and models, such as OpenAI’s GPT-4, Google’s models and Hugging Face-hosted models like Falcon-40B, also helped teams accelerate their projects. They noted that easy file access, tutorial recordings they could watch and share asynchronously, and consistent communication made their boot camp experience a good one.

We also had a few learnings of our own! In the future, we plan to offer a standalone session dedicated to environment setup before the boot camp to get people set up early. We also plan on developing a more robust progress tracking system so that we can track company stage and maturity to tailor the support we offer.

Special Thanks

Thanks to our guest speakers: David Emerson from the Vector Institute, Erik Saarenvirta from Google and Michael Young from Private AI. We appreciated their help and support with office hours for the participants, and their access to and guidance on their tools.

Also thanks to the Georgian team for supporting this project — especially to our Georgian presenters Akash Saravanan, Alex Manea, Angeline Yasodhara, David Tingle, Eli Scott, Rodrigo Ceballos Lentini, Rohit Saha, Royal Sequiera.

Disclosures. The material and information presented here is for discussion and general informational purposes only and is not intended to be, and should not be construed as, legal, business, tax, investment advice or other professional advice. The material and information do not constitute a recommendation, offer, solicitation or invitation for the sale of any securities, financial instruments, investments or other services, including any securities of any investment fund or other entity managed or advised, directly or indirectly, by Georgian or any of its affiliates. Past performance is not an indication of future performance and may not be repeated. Any forward-looking statements, including estimations, expectations, projections, forecasts or predictions, such as anticipated outcomes, proceeds or performance (“Projections”), are for illustration purposes only and should not be relied on and do not reflect any actual outcomes, proceeds or performance, which will be materially higher or lower than the Projections. There can be no assurance that the Projections will be attained. The information and materials herein are as of August 4, 2023 unless otherwise indicated.
