The happiest, most joyful developers are the ones who can find and stay in their flow state the longest: a state where all friction is removed and all focus goes to creative problem-solving. There are many ways for developers to find their flow (Atlassian has written about how they approach it), but the foundation that underpins any flow state is understanding and context: around the work that needs to be done, and around the codebase on which that work needs to be performed.
Atlassian offers fantastic products that help you understand and contextualize work that needs to be done, and over the last ten years, Sourcegraph has been helping enterprises with large, complex codebases understand and contextualize their code. We're proud to count Atlassian as a customer who uses our Code Search to help them build the products you use every day. In combination, we’ve seen many customers use Atlassian and Sourcegraph products to improve developer joy and, as a result, productivity.
AI coding assistants promise a faster way of helping developers find their flow. But for all the hype surrounding them, the foundation that makes them successful remains the same: the more understanding and context of your code the AI has, the more helpful it is to a developer.
Cody is an AI coding assistant that pairs our decade of experience helping enterprises search, navigate, and understand their code with the world’s leading LLMs, helping developers write and fix code and find their flow faster.
Cody’s strength lies in three areas:
Context: Cody draws on our experience understanding and navigating some of the largest codebases in the world to help developers understand, fix, and write code with remarkable accuracy. Cody can understand codebases as large as 600k repositories.
Choice: Atlassian’s ‘open toolchain’ approach mirrors Cody’s: built with interoperability in mind, Cody works seamlessly across code hosts like Bitbucket in both Cloud and Data Center deployments. This philosophy extends to LLM choice, where we eschew proprietary LLMs in favor of the ever-improving models from vendors such as OpenAI and Anthropic, while still providing privacy, security, and uncapped indemnity.
Scale & Security: Zero-retention data policies, no model training, and uncapped indemnity, alongside support for secure environments like Amazon Bedrock and Azure OpenAI, allow Cody to meet the security and compliance needs of even the most heavily regulated and secure companies, like Qualtrics and Leidos.
Cody gets context from your Bitbucket repos in your IDE and provides context-based code suggestions. For Enterprise users, Cody can retrieve context from your team's entire codebase in Bitbucket Cloud or Data Center at any scale. It can infer context based on the repo you have open in your IDE, or you can mention repos in chat to provide additional context for its answer.
Cody supports popular IDEs like VS Code and JetBrains, as well as the most popular programming languages. To get started, sign up and download Cody for your IDE. Create a new branch from your Jira issue and discover ways Cody can help you get into your flow state below:
The two main ways developers interact with Cody in their IDE are via autocomplete and chat. Autocomplete suggests code completions while you type in the editor. Chat allows you to ask questions about your code; Cody can use the files you mention, or its existing knowledge of your codebase, to help you understand things like how a repo is structured or why your code isn’t working.
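As a rough illustration of autocomplete (this is a hypothetical sketch, not output recorded from Cody), a developer might type just the signature and docstring below, and an AI assistant could suggest the function body as a completion:

```python
# Hypothetical autocomplete sketch: the developer writes the signature
# and docstring; the assistant suggests the body.

def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```

The suggestion quality depends on the surrounding context: the better the assistant understands the file and repo, the closer its completion matches your intent.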
As much as it can be a chore for developers, writing tests and ensuring adequate test coverage ensures you’re building reliable and secure software. Cody makes it easy to write unit tests using commands or natural language prompts.
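For instance (a hypothetical sketch, with `slugify` as an invented example function rather than anything from a real codebase), given a small utility like the one below, an assistant can draft unit tests covering the normal path and edge cases:

```python
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The kind of test cases an AI assistant might draft when asked to
# "write unit tests for slugify": a typical input plus an edge case.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Many   Spaces "), "many-spaces")

if __name__ == "__main__":
    unittest.main()
```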
Debugging not only cleans up existing code but also helps ensure it can be more easily managed in the future. This video walks through an example of improving readability, efficiency, and docs using Cody’s suggestions to guide enhancements.
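A before/after sketch of the kind of readability fix an assistant might suggest (hypothetical code, not taken from the video): the first version works but is noisy; the second is the idiomatic equivalent.

```python
def squares_before(nums):
    # Works, but uses an index loop where none is needed.
    result = []
    for i in range(len(nums)):
        result.append(nums[i] * nums[i])
    return result

def squares_after(nums):
    # Equivalent behavior, expressed as a list comprehension.
    return [n * n for n in nums]
```

Both functions return the same result; the refactor only changes how the intent is expressed, which is exactly the kind of behavior-preserving cleanup worth asking an assistant to verify with tests.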
In addition to the pre-built commands that come with Cody, custom commands allow you to create and define reusable prompts tailored to your preferences and work style.
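As a rough sketch of what a custom command definition can look like (the exact file location, command name, and schema vary by Cody version and editor, so treat every key below as illustrative rather than authoritative), a reusable prompt might be defined in a JSON config along these lines:

```json
{
  "commands": {
    "explain-errors": {
      "description": "Explain error handling in the selected code",
      "prompt": "Explain how the selected code handles errors and suggest improvements.",
      "context": {
        "selection": true,
        "currentFile": true
      }
    }
  }
}
```

Here `explain-errors` is a made-up command name; the idea is that the prompt and the context it pulls in (the selection, the current file) are declared once and reused across the team. Check the Cody docs for the current schema before relying on specific keys.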
Cody works seamlessly with Bitbucket Cloud and Data Center and offers a free plan powered by Claude 3 Sonnet, one of the best-performing LLMs on the market. Try it out today, and let us know what you think!
Bonus: Why is context important?

Just how important is context for AI coding assistants? With our LLM Litmus Test, you can see for yourself. The test lets you quickly compare how various LLMs perform on a coding task, with the ability to pass an open source repository through as context. As you can see in the image below, asking “What is squirrel?” of both GPT-4 Turbo and Claude 3 Sonnet, but passing the open source squirrel repository as context only to Claude 3, produces answers of very different depth and accuracy. Try it out yourself!
Kelvin Yap
Senior Product Marketing Manager
Sourcegraph
San Francisco, CA