Building Enterprise Large Language Model Applications


November 11


10:00 am - 05:30 pm

Hacker Dojo

855 Maude Ave, Mountain View, CA 94043


In this full-day workshop, we will cover the foundations of large language models and building applications with custom datasets.


This full-day workshop provides a comprehensive introduction to building AI applications with large language models. Attendees will learn the foundations of models like GPT-3.5/GPT-4 and LLaMa 2, including how they work, how to access them, and best practices for fine-tuning and prompting. A key part of the day will involve hands-on work with custom datasets to train models on specific tasks and document types. We’ll cover gathering quality data, cleaning and labeling, choosing model architectures, prompting techniques, and evaluating performance. The workshop wraps up with deployment strategies, including hosting models locally, leveraging APIs, monitoring, and maintaining production systems. Participants of all backgrounds are welcome. The material will cater to beginners while still diving deep on topics critical to real-world language model adoption.

Some key recent topics to cover could include chain-of-thought prompting, a technique for eliciting step-by-step reasoning from models; reinforcement learning from human feedback (RLHF) for improved answers over time; and cross-domain transfer learning to leverage models trained in one domain for new domains with limited data.
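As a brief illustration (a hypothetical prompt, not workshop material), chain-of-thought prompting simply asks the model to show its intermediate reasoning before giving a final answer, often with a worked few-shot example:

```python
# Minimal chain-of-thought prompt template (illustrative only).
# The few-shot example question and reasoning are made up for demonstration.
def build_cot_prompt(question: str) -> str:
    few_shot = (
        "Q: A train travels 60 miles in 1.5 hours. What is its speed?\n"
        "A: Let's think step by step. Speed = distance / time = 60 / 1.5 = 40. "
        "The answer is 40 mph.\n\n"
    )
    return few_shot + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If 3 pens cost $6, how much do 7 pens cost?")
print(prompt)
```

The trailing "Let's think step by step" nudges the model to continue with its reasoning rather than jumping straight to an answer.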

The course is offered in-person to enhance your learning experience.


Large language models like OpenAI’s GPT-3.5/GPT-4, Meta (Facebook)’s LLaMa v2, Google’s PaLM, and many more have sparked a new wave of AI capabilities, enabling more natural language processing, text generation, and code writing than previously possible. However, effectively leveraging these models requires specialized knowledge around model architectures, training approaches, prompting techniques, and infrastructure. At the same time, access to foundation models is expanding through APIs from companies like Anthropic, Cohere, and Hugging Face. This democratization opens up AI augmentation for a much broader audience.

There is a major need to equip developers, data scientists, and other practitioners with the capabilities to build impactful AI solutions powered by language models. This workshop aims to make large language model adoption more accessible by providing both a 101-level introduction and a deep dive into topics critical to real-world application development. Participants will gain hands-on experience while learning best practices around datasets, training, evaluation, prompting strategies, and deployment of AI systems. Our goal is to empower attendees to leave ready to utilize these transformative models within their own organizations and domains.

Event description

This full-day workshop teaches you how open-source models like LLaMa and closed-source models such as OpenAI’s GPT-3.5 Turbo and GPT-4 can be used to build applications.

During the morning session you will focus on LLM fundamentals. Through hands-on exercises and notebooks, you will explore open-source and closed-source LLM APIs, writing Python scripts that interact programmatically with the models.
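To give a flavor of that programmatic interaction, the sketch below builds a request body in the chat-completions style used by OpenAI-compatible APIs. The model name and temperature here are illustrative defaults, not workshop specifics; actually sending the request requires an API key and an HTTP client or the provider's SDK.

```python
import json

# Sketch of a chat-style LLM API request body (OpenAI /v1/chat/completions
# format shown; model name is an illustrative default, not a recommendation).
def build_chat_request(user_message: str, model: str = "gpt-3.5-turbo") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize what a large language model is.")
print(json.dumps(payload, indent=2))
```

Separating request construction from transport like this makes it easy to swap between hosted APIs and locally served open-source models that expose the same interface.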

In the afternoon session we will begin building chatbots with custom datasets. You will also learn about approaches to debugging, prompt engineering, and augmenting model outputs with retrieval augmented generation (RAG).
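The core RAG idea can be sketched in a few lines: retrieve the documents most relevant to a question, then include them as context in the prompt. This toy version uses bag-of-words cosine similarity as the retriever; real systems (and the workshop's tooling) would use embedding models and a vector store, and the example documents below are invented for illustration.

```python
import math
import re
from collections import Counter

# Toy RAG sketch: retrieve relevant documents by bag-of-words cosine
# similarity, then stuff them into the prompt sent to the model.
def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = tokenize(query)
    return sorted(docs, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LLaMa 2 is an open-source model family released by Meta.",
    "Hacker Dojo is a community workspace in Mountain View.",
    "GPT-4 is a closed-source model accessible via the OpenAI API.",
]
print(build_rag_prompt("Which models are open source?", docs))
```

Because the retrieved context is assembled at query time, RAG lets a chatbot answer from a custom dataset without retraining the underlying model.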

NOTE: Attendees will have access to the full deep learning infrastructure for training AI models and deploying at scale. There is a nominal charge for the full day of compute and API access to language models hosted by Hugging Face or OpenAI. Registration also includes a 1-year SFBay ACM membership ($20 value).

Interactive notebooks, hands-on exercises, slides and QA sessions will help you understand relevant concepts, APIs and best practices.

Check the event agenda below for more details.

Access to the training materials

You will have access to the dedicated GitHub repository with all training resources.

You will be provided with a dedicated Anyscale compute cluster that you will use for the duration of the training. After the event, you can continue to run Ray on your laptop with the training material from the GitHub repo.



Lunch, snacks, coffee, and community camaraderie included.

Modules 1-2: Fundamentals of Language Models

Modules 3-4: Deployment of LLM applications

Organizer & SFBay ACM Prof Dev Chair: Yashesh Shroff @yashroff

For more information about registration, please contact the SF Bay Chapter of the ACM: yshroff at g | m | a i l

We look forward to seeing you at the workshop!
