By day, Allen is a Senior Project Engineer at http://spiders.com/, where he has been instrumental in creating websites and mobile apps for companies and organizations ranging from the American Booksellers Association to the National Science Foundation. By night he creates tools, software, and tutorials to help people share their stories and improve their digital lives. He is also a contributor to the LangChainJS open source project and co-author of the O’Reilly book “Designing and Developing for Google Glass”. He is a Google Developer Expert for Machine Learning, Google Workspace, and the Google Assistant, and has been known to occasionally wear light blue shirts.
SESSION
Using LLMs to bridge the Fuzzy Human / Digital Computer Boundary – tools for EVERY developer
Large Language Models, such as Google’s PaLM model, have taken the world by storm. While many people have fun with these generative AI models, they are also rapidly becoming a tool that developers can use, just as we use databases or GUIs.
But do you need to be a Machine Learning expert to use them? Not anymore! Whatever your skill level as a developer, as long as you can use REST APIs, you can tap into the power of LLMs. We’ll see how this tool can serve two big roles:
- Helping turn “fuzzy” human thinking into more discrete structures traditional programs can use
- Taking data structures that we are familiar with and turning them into human-understandable output
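The first role above can be sketched roughly as: ask the model to answer in a strict JSON shape, then parse and validate the result before handing it to traditional code. The sketch below is illustrative only; `callModel` is a hypothetical stand-in for whatever LLM client you use (for example, LangChainJS against the PaLM API), stubbed here with a canned response so it runs without an API key.

```typescript
// A discrete structure we want traditional code to consume.
interface SupportTicket {
  product: string;
  severity: "low" | "medium" | "high";
  summary: string;
}

// Hypothetical stand-in for a real LLM call (e.g. LangChainJS + PaLM).
// Stubbed with a canned response so this sketch is self-contained.
async function callModel(prompt: string): Promise<string> {
  return JSON.stringify({
    product: "Billing portal",
    severity: "high",
    summary: "User has been double-charged every month since March.",
  });
}

// Role 1: fuzzy human text -> discrete structure.
async function extractTicket(fuzzyText: string): Promise<SupportTicket> {
  const prompt =
    `Read this customer message and reply ONLY with JSON matching ` +
    `{"product": string, "severity": "low"|"medium"|"high", "summary": string}.\n\n` +
    fuzzyText;
  const raw = await callModel(prompt);
  const parsed = JSON.parse(raw) as SupportTicket;
  // Validate: models can drift from the requested schema.
  if (!["low", "medium", "high"].includes(parsed.severity)) {
    throw new Error(`Model returned an invalid severity: ${parsed.severity}`);
  }
  return parsed;
}

// Role 2 (structure -> human-understandable output) is the mirror image:
// serialize the structure into the prompt and ask the model for prose.
```

The key design point is that the LLM sits at the boundary: the prompt pins down the schema, and ordinary validation code guards the program against a model that strays from it.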
By the end of this presentation, attendees will have:
- Learned what the PaLM model is and how it fits into Google’s AI strategy
- Gained insight into how LLMs can be used in a wide range of applications
- Seen concrete examples of using LangChainJS, a popular library for JavaScript and TypeScript, to access Google’s PaLM model through the MakerSuite PaLM API and/or the Google Cloud Vertex AI API
- Understood why you, yes you, can and should learn to use LLMs as a tool