This is the story of a team from several departments, different countries, different native languages – and superpowers.
This is the story of a team that has never worked together in this constellation before.
A team that went through a stressful time but also had a lot of fun and learned a lot. And in the process, developed an AI app for Jira that is meant to provide a better work-life balance for all Jira users in the future.
To participate in this year's Codegeist, some of Actonic's talents joined forces and developed an app, the Jira AI Planner, in just about three weeks.
This year, we wanted to solve a problem as old as humanity itself: stress.
We all know that uneasy feeling on Sunday evening: We're already worrying about the coming workweek. What are the most important tasks? What should I do and when? When are the deadlines?
With an AI-driven app for Jira, we want to achieve a better work-life balance for all users.
Work should be fun again. Often, the hardest part comes before the actual work: prioritization. Our app takes this work off your hands and suggests a work plan that you can customize. This leaves you more time for the actual work, and you can fully enjoy your free time and weekends.
The user manual and further details on the prioritization logic can be found in our documentation. Here we would like to tell you how we developed the app and not so much how it works.
Since everyone brought their specific experience to this project, and our story is a team story, it's also told from the perspective of multiple people.
Every hackathon is a challenge for development. To make sure our developers received plenty of moral support while “coding through the night,” we called on people throughout the company to participate.
Our powerful group consists of these members:
| Name | Role at Actonic | Role in this project | Superpower |
| --- | --- | --- | --- |
| Andrei Pisklenov | Head of Development | Big Brother watching (and helping) you | Listening to the team's crazy ideas for a while before helping them with valuable tips – and serving as the best backup ever for any question |
| Rustem Shiriiazdanov | Product Owner Report Builder | Business Analyst, AI prompt engineer | Creating a first-class proof of concept, finding solutions, taking the lead in the event of technical uncertainties, and always being there for teammates |
| Lisa Hörig | UX / UI Design expert | Designer and Analyst | Keeping the mission and vision in mind, requirements management and engineering, helping to maintain the high mood in the team, and jumping in for video editing and design tasks |
| Vladislav Komanchy | Developer | Team Lead of the Jira AI Planner Dev Team | Originator of the app idea and humble genius! Designed and implemented the core business logic and all API-related work. Made sure the different components of the app work seamlessly together. Led the team of developers and QA engineers to victory! |
| Zhenya Elfimova | Product Owner Timesheet Builder | Graphic Design | Fearless in tackling new challenges and designing the app icon and images for the future app placement on the Marketplace! |
| Igor Pisklenov | Developer | QA engineer, AI prompt engineer | Ensuring the best quality control of any application – finding errors where there are none! |
| Patricia Modispacher | Marketing Manager | Project Owner from the second week on | Keeping everyone in the real world by reminding them of deadlines, crafting detailed documentation, articles, and demo videos, and publishing the submission |
| Daniil Tkachenko | Developer | Development | Guru of task prioritization practices: found the best way to implement the Eisenhower matrix. Daniil also implemented a brilliant-looking front-end part of the application. |
| Maksim Grebenjuk | Developer | Development | Master of light-speed software development, Maksim easily implemented the challenging Calendar view for the front-end while simultaneously paying close attention to details. |
| Nikoloz Surmanidze | Product Owner of Data Protection and Security Toolkit | Assistance | Helping the team find the right track by jumping in with seemingly strange questions, which turned into brilliant design decisions! If you want to design an AI app – call Nik! |
| Andreas Springer | Head of Marketing | Project Owner in the first week | Enabling the team to unfold their full potential |
As you can see, most of us had to deal with new roles and tasks. But it's best to read for yourself:
The Proof of Concept (POC) was a pivotal starting point in our project, especially since it was our first endeavor into creating an AI app, particularly a Cloud AI app with Forge. We understood the significance of prioritizing the POC over UI design and other tasks as its success would pave the way for the project's development. Our focus for the POC was to keep it minimal and streamlined, concentrating on generating suggestions and timelines to guide users efficiently. Defining the criteria for AI's scheduling suggestions was crucial. We aimed for a schedule to be considered "acceptable" when approximately 80% of AI-generated suggestions were successful. To measure this, we had our developers conduct randomized tests to determine when prompts sent to the API returned successful results in at least 80% of cases.
We chose ChatGPT as our tool for its user-friendliness. This involved crafting a suitable prompt, identifying necessary data fields for collection, and keeping data collection simple by focusing on a single Jira project and gathering only essential data.
In summary, our POC validated several critical aspects:
- Our Forge app connects to the OpenAI API and receives valid responses (see the sketch below).
- Users can control the schedule and priority matrix.
- We could proceed with the planned app requirements.
- We maintained responsible AI practices, using minimal data and not training on customer or additional data, while still ensuring high-quality AI responses.
The POC laid a crucial foundation for our project, providing insights and direction for the development of an effective AI-driven task scheduling application.
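To give an idea of what that first point looks like in practice, here is a minimal, illustrative sketch of a Forge resolver calling the OpenAI chat completions endpoint. The resolver name, the model, and the environment variable are assumptions made for this post, not our exact production code (which also needs the external fetch permission declared in the manifest).

```js
// Illustrative sketch only – names, model, and env variable are assumptions.
import Resolver from '@forge/resolver';
import { fetch } from '@forge/api'; // Forge's fetch for external requests

const resolver = new Resolver();

resolver.define('suggestSchedule', async ({ payload }) => {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The key would be stored as an encrypted Forge environment variable.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [
        { role: 'system', content: 'You are a task scheduling assistant.' },
        { role: 'user', content: payload.prompt },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
});

export const handler = resolver.getDefinitions();
```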
Rustem Shiriiazdanov, Product Owner of Report Builder
In the beginning, our relationship with ChatGPT was one of pure users, as all of us had some experience with it. Creating what we believed to be an effective prompt for the API seemed straightforward. Our initial venture into the world of prompts started with web-based chats. Below is our first prompt, with completely made-up data:
"Easy," we thought. "This is it! Now we can seamlessly integrate this into our application via the API."
As our enthusiasm increased, so did our ambitions. In the following days, we experimented with various prompt combinations, each designed to bring us high-quality answers suitable for our context. We assessed the answers from an expert, human perspective and found them good.
Typical prompts we tried were:
“Imagine that you are a developer, and you have several tasks to perform. In what order will you perform them? Could you create a work plan from it?”
“There is a list of tasks. Could you make a work plan based on it? Pay more attention to the priority of tasks. Please provide a detailed and rational justification for your prioritization.”
“Fill in the Eisenhower matrix for the following scope of tasks. Add a detailed and rational explanation.”
They looked so simple, and the AI responded with such nice results, that we naively assumed a little fine-tuning would make them production-ready. However, the initial success of our prompt “engineering” was deceptive, and we soon faced many challenges!
For instance, ChatGPT regularly overlooked significant portions of our sourced issues. It tended to offer detailed responses for a few of the issues and then vaguely commented that the remaining issues were similar, without truly examining them.
Our solution? Establishing five concrete rules to counter the AI's tendency to generalize, whether for brevity, simplification, or other reasons, combined with a rule reminding the AI that this is a real-life scenario.
From this, we realized ChatGPT-4, while advanced, often leans into generalizing outputs.
🤯 Lesson 1: Always make sure you test your input against generalization attempts.
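To illustrate what such rules can look like, here is a made-up sketch of a rule block prepended to every request. The wording below is invented for this post; it is not our exact set of five production rules.

```js
// Illustrative only – not the exact wording of our production rules.
const ANTI_GENERALIZATION_RULES = [
  '1. Treat every issue in the list individually; never summarize a group of issues as "similar".',
  '2. Return an entry for every issue key you receive, even if the reasoning repeats.',
  '3. Do not shorten the answer for brevity; completeness matters more than length.',
  '4. Do not skip or invent issues.',
  '5. This is a real-life scenario: the output will be used as an actual work plan.',
].join('\n');
```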
Our second challenge lay in our optimistic belief that we could bypass OpenAI's function calling and stick to well-known, old-fashioned prompting. We naively assumed that simply instructing the model, for example by requesting a “JSON structure response,” would be sufficient. However, ChatGPT's adherence to the format was sporadic, and it often included excessive comments.
To remedy this, our immediate strategy was a two-fold approach: refine the rules to eliminate any non-JSON outputs and introduce a filter on our application end to retain only the JSON. The more sustainable solution? Embracing OpenAI's function calling.
📝 Lesson 2: Use tools and wrappers (external or AI vendor provided) that allow you to structure LLM output. Do not rely on a language model itself to provide symbol-to-symbol repeatable results for tasks where 100% consistency is required.
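As a rough sketch of what that looks like with the OpenAI chat completions API: the function name and schema below are illustrative assumptions, not our actual definition.

```js
// Illustrative sketch of OpenAI function calling for structured output.
const prompt = 'Create a work plan for the following issues: ...';

const requestBody = {
  model: 'gpt-4',
  messages: [{ role: 'user', content: prompt }],
  functions: [
    {
      name: 'return_schedule',
      description: 'Return the proposed work plan as structured data.',
      parameters: {
        type: 'object',
        properties: {
          entries: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                issueKey: { type: 'string' },
                order: { type: 'integer' },
                reasoning: { type: 'string' },
              },
              required: ['issueKey', 'order'],
            },
          },
        },
        required: ['entries'],
      },
    },
  ],
  // Force the model to answer through the function instead of free text;
  // the JSON then arrives in choices[0].message.function_call.arguments.
  function_call: { name: 'return_schedule' },
};
```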
Our third discovery was more of a revelation: ChatGPT-4's surprising deficiency in calculating date differences. To combat this, we integrated due date calculations within our application, sharing only the resulting numbers with ChatGPT.
The lesson? 🎓 Lesson 3: Before consulting AI, always consider pre-processing your data with conventional tools. Do not ask AI to do something you could easily do with regular tools.
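For example, instead of asking the model how many days remain until a due date, a few lines of plain JavaScript can do the arithmetic, and only the resulting numbers go into the prompt. The sample issues below are made up for illustration.

```js
// Compute due-date distances ourselves and hand the AI only the numbers.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function daysUntilDue(dueDate, today = new Date()) {
  // Round up so a partially remaining day still counts as one day.
  return Math.ceil((new Date(dueDate) - today) / MS_PER_DAY);
}

// Made-up sample issues, just for illustration.
const issues = [
  { key: 'DEMO-1', priority: 'High', dueDate: '2023-10-20' },
  { key: 'DEMO-2', priority: 'Low', dueDate: '2023-11-02' },
];

const issuesForPrompt = issues.map((issue) => ({
  key: issue.key,
  priority: issue.priority,
  daysUntilDue: daysUntilDue(issue.dueDate),
}));
```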
Lastly, we addressed occasional lapses in the AI's logic. For example, ChatGPT once suggested giving immediate priority to a low-priority task. Its reasoning? Placing the issue first in the schedule means it has the highest priority and must be completed today. In other words, the AI didn't derive the task's priority from the issue's attributes; it derived the issue's attributes from the task's random position in the sequence.
Solution: A set of strict rules to assess issues' attributes step by step was implemented, ensuring that ChatGPT maintains the context of the assignment throughout the session.
AI may provide quick and insightful responses to challenging questions, but it sometimes fails to follow very simple logic. 🧠 Lesson 4: It is important to remember that any instruction that can be misinterpreted will be misinterpreted, and you must secure your AI interactions with additional checks and rules to guard against this.
Beyond these four lessons, we learned many other things, of course. One of the most important was that making a good request to AI requires not only knowledge of the domain you are asking about, but also an understanding of how the AI could interpret what you are saying. I would suggest that anyone who ever wonders why we need to be responsible with AI try the exercise of writing a good prompt: any doubts will be dispelled. Be responsible!
Lisa Hörig, UX / UI expert
Working on this project was like no other: We challenged each other to learn more and become better not only in our own profession, but also in expanding our skills.
In our usual roles, we know how to get the job done. Codegeist encouraged us to think outside the box and also try out other roles: I usually work as a UX designer, but in cooperation with our Product Owner Rustem, I got even more insight than usual into his way of requirements engineering. We set aside some time to talk about all the different aspects of the project and requirements engineering from a very technical perspective. Later, we contributed to the design aspects together and could all watch our team vision come to life each day.
This is especially true as we experimented with two different views. I contributed the first wireframe ideas, but with Figma, we all got “back to the drawing board” quite quickly, and design iterations were made by all the team members who wanted to try their hand at designing.
So in the end, Codegeist helped us to challenge ourselves but also to challenge each other. I’m glad that my team took the chance to look at other professions and try out new skills, and I’m happy that I could teach them a few things. We definitely built a nice product. But our biggest learning will stay with us for a long time: with a passionate team, you just need to keep an eye on the goal, and even when challenges occur, your team has your back.
Vladislav Komanchy, Senior Developer
As a Proof of Concept for our app, we aimed to explore the possibility of creating an application that could retrieve user issues and generate a work schedule to aid users in task scheduling. To realize this idea, we utilized the OpenAI API and developed a basic Forge app for testing.
During the development of the Forge app, we encountered no significant issues and had a smooth experience. We appreciated the capability to use Custom UI within the Forge framework. However, we quickly identified a challenge, which is also discussed in detail in this article.
This challenge revolved around the time needed to process 1 + n task prompts when interacting with the OpenAI API: for the prompts containing multiple issues that we sent to OpenAI, processing took at least 30 seconds.
In response to this issue, as detailed in the article mentioned above, we devised a solution using a facade pattern. We created an Express app that incorporates two endpoints: the first for job creation, where prompts are sent to AI, and the second for retrieving job results. Once the results are fetched, both the job and the result are automatically deleted. We utilized an in-memory database, RxDB, for job management. If a job remains unresolved (either not started or not retrieved by the user) for over an hour, it is automatically deleted from the system.
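Below is a much-simplified sketch of that facade. A plain in-memory Map stands in for the RxDB store we actually used, askOpenAI is an assumed placeholder for the OpenAI call, and all names and routes are illustrative rather than our exact implementation.

```js
// Simplified sketch of the facade service; names and storage are illustrative.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> { status, result, createdAt }
const ONE_HOUR = 60 * 60 * 1000;

// Placeholder for the actual OpenAI call (omitted here).
async function askOpenAI(prompt) {
  return { schedule: [] };
}

// Endpoint 1: create a job and start the AI request in the background.
app.post('/jobs', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'pending', result: null, createdAt: Date.now() });

  askOpenAI(req.body.prompt)
    .then((result) => jobs.set(jobId, { ...jobs.get(jobId), status: 'done', result }))
    .catch(() => jobs.set(jobId, { ...jobs.get(jobId), status: 'failed' }));

  res.status(202).json({ jobId });
});

// Endpoint 2: fetch the result; the job is removed once it has been retrieved.
app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  if (job.status !== 'done') return res.json({ status: job.status });
  jobs.delete(req.params.id);
  res.json({ status: 'done', result: job.result });
});

// Jobs that are never picked up are cleaned out after an hour.
setInterval(() => {
  for (const [id, job] of jobs) {
    if (Date.now() - job.createdAt > ONE_HOUR) jobs.delete(id);
  }
}, ONE_HOUR);

app.listen(3000);
```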
[Screenshots: Search process, Search process pt. 2, Storing previous data in Forge Storage, Getting user issues, App.js component’s render]
The data derived from the OpenAI API requests were meticulously formatted and integrated into two key components of our application: the Calendar, implemented using react-big-calendar, and the Eisenhower matrix, developed using react-beautiful-dnd. The Calendar is specifically designed for users who employ estimations for their issues, while the Eisenhower matrix caters to other users.
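As an illustration of the Calendar side, here is a hedged sketch of how AI-suggested time slots can be fed into react-big-calendar. The field names of the suggestions are assumptions, not our exact data shape.

```jsx
// Illustrative sketch – suggestion field names are assumptions.
import React from 'react';
import { Calendar, momentLocalizer } from 'react-big-calendar';
import moment from 'moment';
import 'react-big-calendar/lib/css/react-big-calendar.css';

const localizer = momentLocalizer(moment);

export function ScheduleCalendar({ suggestions }) {
  // Map each AI-suggested slot onto a calendar event.
  const events = suggestions.map((s) => ({
    title: `${s.issueKey}: ${s.summary}`,
    start: new Date(s.start),
    end: new Date(s.end),
  }));

  return (
    <Calendar
      localizer={localizer}
      events={events}
      defaultView="week"
      style={{ height: 600 }}
    />
  );
}
```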
Following the hackathon, I find myself increasingly interested in AI and ML. There is potential for exploring the development of our own model to address specific tasks.
I'm proud of our collaborative work in bringing this app to life with my colleagues. It has shown me that the OpenAI API can greatly accelerate app development, eliminating the need for custom model training.
Zhenya Elfimova, Product Owner of Timesheet Builder
I wanted to explore new horizons, specifically in graphic design tasks, as it was uncharted territory for me. Honestly, I had reservations and feared failure. So, why did I step up to the plate? I believed that with this team, I would never be alone. And I have to admit, with Lisa's support, I managed to do a pretty good job. There were no major technical challenges, and I easily grasped how the app, where I created icons and images, functioned. My main obstacle was self-doubt. This experience has inspired me to enhance my design skills, and I intend to take on more design tasks in the future. I don't want this newfound knowledge and skill to go to waste.
Patricia Modispacher, Marketing Manager
During the last Codegeist, I had the opportunity to write the texts for our submission, record the demo video, and handle various marketing tasks. The difference from this year is that back then, I only got involved once the project was already completed. This time, I had the chance to be there from the very beginning. As a marketer, I had never experienced such smart people brainstorming together and creating a proof of concept for an app. In fact, we had daily meetings to discuss improvements!
For me, as a writer, these hypothetical thought processes seemed quite abstract, and I was relieved when our design queen, Lisa, presented the first mockups.
At the beginning of the project, Andreas was the project manager, but he went on vacation and handed it over to me. (That's okay, I'm not mad... but the next one's on you!) So I got an overview of deadlines and requirements, set up appointments, and prepared the marketing materials to the best of my ability while our devs were working on the code.
As the deadline approached, the pressure continued to mount. To submit our app, we needed to create a user guide and a demo video in a timely manner. It's challenging to make screenshots or a video without the app. Our development team was working at full throttle during those weeks. Not only in the evenings and at night, but also on weekends. (Looking at you, Vlad, Rustem, and who else?)
With a worried brow, I logged into our chat in the evenings and tried to persuade my colleagues to call it a day, even though they were keeping an eye on the deadline. And they kept on coding.
Despite the diligent work of many talented people, even a few hours before the deadline, the app was still not in a state where I could create a video. As the last person in the workflow chain, I felt the pressure enormously.
But this is not the story of a stressed-out marketing woman left to deal with her worries alone.
This is the story of a team that helps each other at all times and would rather work through the nights than miss the deadline or let one person down.
This is the story of people who spontaneously set aside all other tasks (sorry Nik, the release notes can wait!) and support and appreciate each other.
This is the story of people who have developed a solution for people.
A Jira AI Planner that helps you achieve a better work-life balance in the long run and get through the day stress-free and happy.
And when we achieve this goal, it will all have been worth it.
While we were working on a stress-relief solution, we felt the stress firsthand once more. Nevertheless, we managed to pull through in the end. This achievement was not solely due to our dedicated team but also because of Forge. We are quite certain that without Forge, developing an AI app in such a short time would have been an enormous, nearly insurmountable challenge, especially considering none of us on the team had prior experience with AI app development.
With Forge, we clearly understood the framework we were using, the necessary patterns, and how to apply them. This allowed us to delve deep into exploring the AI aspect of the journey.
It was an enormous amount of fun to stay up all night with you, laugh, sweat, cry, and, in the end, create a solution that can assist many people.