Anything exciting about how ChatGPT is helping you out?

ragni_chawla April 16, 2023

#openai

4 comments

Darryl Lee
Community Leader
Community Leaders are connectors, ambassadors, and mentors. On the online community, they serve as thought leaders, product experts, and moderators.
April 16, 2023

Mainly it hasn't. Unfortunately more frequently I've seen evidence of what I suspect is ChatGPT used to post answers here on the Community.

Unfortunately some of those answers are straight-up wrong.

What's interesting to me is that some of the answers posted include things that seem completely reasonable (Jira Automation should have a Lookup User function!).

ChatGPT works by giving the response that the requestor wants to hear. That's different than the CORRECT response.

I found this article that seems to cover the issue pretty nicely:

Darryl Lee
Community Leader
April 16, 2023

@Nic Brough -Adaptavist- wrote a far more detailed response about his company's testing, as well as other ChatGPT-generated answers found on Community:

Adaptavist fed it 250-ish questions on Jira and Confluence, ranging from simple "how do I" to quite over-clarified questions.  We had a Community lead hook it up to Community and we deleted all 150 answers and comments.

Of approximately 400 questions, we saw a lot of well-written and clear answers that were utterly howlingly wrong.  There was only one that was right and we had to rewrite the question 4 times before it got it.  It lies, it extracts snippets of documentation and glues them together for the wrong thing, it writes code referring to libraries and function calls that don't exist, it certainly doesn't understand that Jira, Confluence and Bitbucket are separate products, nor that there are Cloud, Server, and DC variants.

Nic Brough -Adaptavist-
Community Leader
April 16, 2023

>ChatGPT works by giving the response that the requestor wants to hear. That's different than the CORRECT response.

Oh yes, that!  I'm going to borrow that a lot...

Darryl Lee
Community Leader
April 18, 2023

I found where I stole it from:

https://social.coop/@DrewKadel/110154048390452046

Something that seems fundamental to me about ChatGPT, which gets lost over and over again:

When you enter text into it, you're asking "What would a response to this sound like?"

If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!

But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it *is* doing something else.

It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.

John S Davidson April 16, 2023

A side note to this, and my apologies if I'm preaching to the choir.  There is often a working assumption that AI/Bots are a silver bullet.  When the general public gets wind of a new bot or AI solution, it is really the start of the journey towards usefulness.

At this stage, realistically only the development entity and some Beta testers have used the Bot/AI, which makes it a rather closed environment for learning.

That's why it's important to feed back. Give a thumbs up or down, and in the case of ChatGPT, try regenerating the response. This feeds back whether the results are valid and allows refinement and learning.

This:

"Adaptavist fed it 250-ish questions on Jira and Confluence, ranging from simple "how do I" to quite over-clarified questions.  We had a Community lead hook it up to Community and we deleted all 150 answers and comments.

Of approximately 400 questions, we saw a lot of well-written and clear answers that were utterly howlingly wrong.  There was only one that was right and we had to rewrite the question 4 times before it got it."

is a wonderful exercise:

  • "Play" with the system
  • Review the responses (and feed back!)
  • Retry, to give the AI a chance to correct itself. (and feed back!!!)

I put simple queries in and feed back.  If I know a slightly better response than the system, I ask the question in a more precise way and then feed back on that.  Teach the system; it's in all our interests.  Sure, this is a small sample and it will take an age to learn, but the more people doing this, the quicker and better the learning process will be.

From a purely selfish POV: if you have created a product, it's in your interest to have the bots/AI correctly support your system.  That makes it easier for people to use, and that will encourage them to use it.

Nic Brough -Adaptavist-
Community Leader
April 16, 2023

Yep, the bit later on in that post was my conclusion that "it was utter garbage because it had not been trained on the right things", not that it couldn't do it.

John S Davidson April 16, 2023

Definitely.  And my response was NOT a dig.  It's very interesting and worrying that the responses look so "real" whilst being so wrong.

I wonder if OpenAI will be open to working with Product teams to improve those specific responses?

Perhaps an extra feedback option for the correct response if you're an approved representative from the Product organisation.

Nic Brough -Adaptavist-
Community Leader
April 16, 2023

I certainly didn't see it as a "dig", it was a good load of further thought!  My comment was just to say "I'd thought of a bit of what you were talking about".

There's another thread where @Darryl Lee makes an excellent point - ChatGPT tends to give you the answer you want to hear.  Which is usually not the correct answer.

Darryl Lee
Community Leader
April 16, 2023

So it's our job (or Atlassian's) to spend our time training ChatGPT?

Who's gonna pay me or them? The fine folks at OpenAI?

Humph.

From a purely selfish POV: if you have created a product, it's in your interest to have the bots/AI correctly support your system.  That makes it easier for people to use, and that will encourage them to use it.

I'd say it's in OpenAI's interest to do this work. Atlassian's? Mehhhh.

Why would Atlassian want people to use a tool that is going to give wrong answers unless you devote time and resources to "training it"?

Wouldn't Atlassian's time and money be better spent writing proper documentation? Answering support tickets from paying customers? Sending swag to their volunteer community that answers a lot of questions for free?

I spend time doing that here because (I hope) it is a place known for providing correct answers, and I find it intellectually challenging and rewarding to do so. I also find useful information in this Community myself (although I do wish search was a little better).

It's hard for me to see the value of me instead spending my time "fixing" ChatGPT's errors. 

ALSO: If people who need correct answers are going to ChatGPT, how will they even know if ChatGPT's response is wrong enough so that they can provide feedback? Ugh.

User: Hey, can you please tell me how to write automation that does X?

ChatGPT: Sure, do steps 1, 2, and 3.

User: Um, when I tried step 2, I can't find that function.

ChatGPT: Oh, well maybe you can try alternate step 2 ...

User: That doesn't exist either. You suck.

John S Davidson April 16, 2023

Job to do it?  No, certainly not.  But it is an advantage to have it stronger for the products.  I'm not talking about dedicating a team to continuously train the AI.

It's also the longer-term view.  I agree it's worth Atlassian's time answering support tickets from paying customers, but if the community (including an educated AI) can answer instead, isn't that saving Atlassian support time and therefore money?  Essentially it's like implementing a bot without paying for and managing the bot.

And to be clear, I was not specifically talking about Atlassian, or about ChatGPT.  Rather, the approach to and reception of bots and AI when they first go out into the wild.

The last point (unless I've misread) looks to reinforce my point about the experts gaming the system and feeding back.  And if the lay user can't find the function in step 2, they can feed back with a thumbs down.  Then it's on OpenAI to see why there are so many thumbs-downs on a certain topic.

Taking the wider view that AI/bots are going to be the preferred/mandated first point of contact for queries (n.b. the latest ChatGPT incorporation into Bing), similar to how the general public go to a search engine today, this is coming, so it's an advantage to swim with the tide.

Not everyone wants to join a Community.  Sometimes they just want a quick answer.

Nic Brough -Adaptavist-
Community Leader
April 24, 2023

I think it's up to the people choosing to use a 'bot to understand that they are talking to a bot, and hence they need to accept it's going to get it horribly wrong, potentially a lot.

Darryl Lee
Community Leader
June 21, 2023

So talking about properly training AI before it can be useful, this article that came out yesterday has some great insights on that:

tl;dr: even after the "model" has been built and refined, these algorithms still need care and feeding by actual humans to label/categorize/properly answer things. Ironically, things like Mechanical Turk paved the way for farming out a lot of this work.

Some choice quotes:

Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.

OMG OMG and THIS:

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

That's right. AIs trained on data from another AI. Yeah um, this is going to turn out just great. BUT WAIT!

Lately, the best-paying work is in the U.S. In May, Scale started listing annotation jobs on its own website, soliciting people with experience in practically every field AI is predicted to conquer. There were listings for AI trainers with expertise in health coaching, human resources, finance, economics, data science, programming, computer science, chemistry, biology, accounting, taxes, nutrition, physics, travel, K-12 education, sports journalism, and self-help. You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry. There were also listings for people with security clearance, presumably to help train military AI. Scale recently launched a defense-oriented language model called Donovan, which Wang called “ammunition in the AI war,” and won a contract to work on the Army’s robotic-combat-vehicle program.

So I'm sure some of you all remember anecdotes like an American engineer having to train cheaper overseas workers on what he did, only to later lose his job to those workers?

Welp. I guess if I really wanted to teach a robot about Atlassian Admin for money (and to put myself out of a job), maybe I need to keep an eye on the listings at this scale.ai place. Alas - looks like they're just looking for lawyers and foreign language translators. Lawyers make $45/hr, I doubt they'll pay Atlassian Admins more than that. (Translators make $15/hr.)

Darryl Lee
Community Leader
April 23, 2023

Just read an interesting take, including how one professor used ChatGPT to take on the chore of writing recommendation letters:

“A dirty secret of academe is that most professors have a cache of letters separated into different categories,” says Matt Huculak, another AI-using academic and the head of advanced research services at the University of Victoria libraries. They’ll typically have folders full of excellent, good, and average ones, which can be adjusted and repurposed as appropriate. But Huculak wondered if AI might help break that chain, especially for top students. So he asked ChatGPT to write an “excellent” reference letter, and then, instead of using it as a template, he treated it as an enemy. He opened the ChatGPT output in one window and tried to compose the very opposite of what he saw: an anti-formulaic recommendation letter. “What I wrote ended up feeling like the most ‘human’ and heartfelt letter I’ve written in a long time,” he told me. The student won a prestigious scholarship at Cambridge.

Which I suppose is what some Community contributors have done for me and Nic in their usage of ChatGPT here. Given us the opportunity to say "Uh, no, that's wrong." And then write the correct answer.

(One of the other professors describes rubber-ducking without calling it rubber-ducking, heh.)

When she began developing a syllabus for a new class, she asked ChatGPT to generate ideas, “kind of like a rubber duck that talks back.”

But at least in my particular cases, the AI answers provided nothing new. No kernel of inspiration. They were just wrong. But in some cases, debunking them (making me do a lookup to make sure there wasn't some new feature in Jira Automation I wasn't yet aware of) wasted my time.

Talking to some folks at Glean at Team last week, their promised AI-powered improvements would only look at your company's Confluence/Jira/whatever content you point it at to provide its answers.

I suppose that if an LLM was trained on accepted answers here in the Community, then it might have a fighting chance of getting answers right.

At the same time, I'd be happier still if resources were devoted to applying AI to improving SEARCH, so that rather than "making up answers", it can just direct people to the original source. Correct answers, by humans.

Adam Ahmed June 14, 2023

Fwiw, I have been getting a ton of value out of AI. Github Copilot is _absolutely phenomenal_ at autocomplete. I might not remember what I called that function in the other file, but you can be sure Copilot knows! The only problem I have with it actually has nothing to do with the AI and everything to do with the overriding of VS Code Intellisense. When Copilot _isn't_ what I want, it takes extra keystrokes to bring up the Intellisense options.

Meanwhile, I've also had great success with ChatGPT. Not because it is a good coder! It's a terrible coder! But because it is great at solving the "blank page" problem. Once I needed to do a nontrivial array transformation and didn't know how to start. I asked ChatGPT for an answer and it gave me terrible, broken code that was O(n^2) in space and time. But it was a start! I saved a good 30 minutes of figuring out a high-level approach by seeing what ChatGPT did. And then I just had to make it work. :)

Of course, then I wasted the 30 minutes having a fun conversation with it about Big O (it thought it had a better algorithm, but it didn't!).
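(The thread doesn't show the actual transformation, so as a purely hypothetical sketch of the kind of fix-up described above: a first-draft nested loop that joins two lists in O(n^2) time, versus the same join done in O(n) with a lookup table. The data shapes and function names here are invented for illustration.)

```python
# Hypothetical example -- not the code from the thread.

# O(n^2): for every order, scan the whole user list. This is the kind
# of nested-loop draft a first attempt (human or AI) often produces.
def join_quadratic(users, orders):
    result = []
    for order in orders:
        for user in users:
            if user["id"] == order["user_id"]:
                result.append({**order, "name": user["name"]})
    return result

# O(n): build a lookup table once, then join in a single pass.
def join_linear(users, orders):
    by_id = {u["id"]: u for u in users}
    return [{**o, "name": by_id[o["user_id"]]["name"]}
            for o in orders if o["user_id"] in by_id]
```

Both produce the same output; the second just replaces the inner scan with an average O(1) dict lookup, which is usually the whole "make it work" step.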

Anyway - AI as a tool is very much the future. While there are certainly gotchas to be wary of (hallucination, copyright risk, etc), you should not be passing it up as a tool to be more productive!

Nic Brough -Adaptavist-
Community Leader
June 16, 2023

So have you tested what it has told you?

Darryl Lee
Community Leader
June 16, 2023

From what I read, @Adam Ahmed already had the knowledge to know that ChatGPT's solution was "terrible":

it gave me terrible, broken code that was O(n^2) in space and time. But it was a start!

Which reminded me of an article I just read:

tl;dr: Instructors at Harvard ask students to do an assignment. An hour in, the instructors present what ChatGPT came up with. ChatGPT's answers were boring. This prompted students to look at their own answers and say "Oh crap, we're no better than ChatGPT", and forced them to think of something better.

So yeah, very much along the same lines as that previous article I mentioned.

Those use cases, along with what @Adam Ahmed presents, assume that the user already has sufficient expertise in the topic at hand to critically assess what ChatGPT vomits forth.

Many of us in the Community encounter users who probably do not yet have this expertise in Jira/Confluence/etc., which is why they're coming here.

So at least in this particular arena, ChatGPT is not seen as an unwelcome competitor, but rather a willfully ignorant distractor that may in fact lead users to do damage to their systems, or worse, frustrate them with misinformation, and perhaps lead them to the conclusion that Atlassian products suck because they don't have the hallucinated features that ChatGPT has been known to pull out of its mechanical ass.

SO then. Sure. Brainstorming tool? Great. Rubber duck to get your brain flowing? Probably preferable to me doing it to you on Slack. But as a more broadly used tool presented to end users as an "all-knowing authority"? Gaaah, no.

Adam Ahmed June 16, 2023

Yeah you definitely nailed my experience. It fails any testing. But it gets me to the goal much faster than if I started from scratch.

My cofounder wrote about an experience we had where AI was used as the final product. Needless to say it lied to customers. I wouldn't use today's AI in that way. (but maybe someday!)

Nic Brough -Adaptavist-
Community Leader
June 17, 2023

I think what you're saying here is that the AIs are broadly useless on their own, but can get a human started on something.

I'm very much on Jen's side when they talk about how bad it can be, and how it needs to be checked by something that understands the subject properly.

Adam Ahmed June 17, 2023

This is the Way. 😄
