My SO is a college educator facing the same issues - basically correcting ChatGPT-written essays and homework. Which is, besides being pointless, also slow and expensive.
We put together some tooling to avoid the problem altogether - basically making the homework/assignment BE the ChatGPT conversation itself.
That way the teacher can simply "correct"/"verify" the mental model the student used to reach a conclusion/solution.
With grading that goes from zero points for "the student basically copied the problem into another LLM, got a response, and pasted it back into our chat" to full points for "the student tried different routes - re-elaborated concepts, asked clarifying questions, and finally expressed the correct mental model of the problem".
I would love to chat with more educators and see how this can be expanded and tested.
For moderately small classes I am happy to shoulder the cost of the API.
Homework and home assignments are not really a way to grade students. They are mostly a way to force students to go through the materials by themselves and check their own understanding. If they do the exercises twice, all the better.
(Also, nowadays homework is almost all perfect scores.)
Which is why LLMs are so deleterious to students. They are basically robbing them of the thing that actually has value for them: recalling information, re-elaborating it, and applying new mental models.
Justin, as a friendly suggestion, double check your recruitment process.
I believe you guys are rejecting people based on keywords instead of reading the CV.
I applied to a position I believe I'm a great fit for, but my CV doesn't contain the keywords that appear in the job description. Automatic rejection in no time.
Nothing wrong with that, but you are leaving a lot of talent on the table, and I was very hesitant to apply given all the comments I see on your posts.
lol why fight it? just do the same thing…copy paste the website/company info & ur resume through gpt and make sure to use your new “prompt engineering” skills to run it through a bunch of times with prompts like “make me sound even more certified and a better fit for this company”. u have to be creative with your prompt though
Can definitely promise all resumes are reviewed by an actual human, pending an applicant's answers to specific application questions (e.g. do you require sponsorship [we aren't set up to accommodate visas at this time], are you willing to complete a background check, etc.) that would indeed cause an application to be auto-rejected.
In cases like this the author could just set up a filter like:
"If the email contains a task for me" (or some variation)
and then add a Gmail label to it.
That way the author would immediately find all actionable emails in one folder, which is much faster to skim and makes them easier to keep track of.
Another option would be to have GabrielAI generate a draft like "reply acknowledging the task and put a to-do date in the email one week out".
This would let Google track both the email and the deadline.
Author here. This all sounds good - I'll be trying out your app. Funnily enough, when we were building this, a friend pitched the idea you're doing (seems fantastic), so we will be signing up for the GabrielAI beta.
A bit shameless, but do you mind if I reach out to you about your experience building GabrielAI?
Sounds good. I can't find an email on your homepage, but I sent one to the address on your privacy policy page. It might land in your spam folder. Looking forward to it.
I loved the Gmail example because I am the main hacker behind GabrielAI.
Basically smart filters for Gmail and Outlook.
You describe in plain English which emails to filter and how to create a DRAFT, and the tool automatically filters your email and generates drafts or adds labels.
It was born out of the frustration of replying to several emails, all the same, with content that was already available online - while still needing to provide a human, technical-ish touch.
I think Twitter provides a case study here. The site is still working fine despite being staffed by only 20% of the employees it had just over a year ago.
Yea, I think we still need to wait and see. Airplanes keep gliding when you turn the engine off. I think Twitter is still in the "Wile E. Coyote ran off the cliff but hasn't started falling yet" phase.
I take a middle position on this having seen it all at Twitter since 2006. The site does still work, but not without outages. Of course, Twitter had major outages during the Williams/Costello/Dorsey period as well. What happened was that when Musk obtained control, the haters claimed that Twitter/X would burn down or get bricked. None of that ever happened, which leads some people to believe it "still works fine" with 20% of the former staff.
Fractional CTO, looking after your tech strategy to avoid costly and possibly fatal mistakes in your young venture.
Helping and leading younger engineers to build a product with an eye on quality, speed of delivery, and business outcomes.
I don't compete on price, but I've been told that you get a lot of quality for it.
Besides the technical approach, I make sure the future leaders on your team understand why the business matters, how to speed up delivery, where to cut corners, and when to focus on quality.
I help shape engineering cultures.
Focused mostly on backend, with experience in LLM and generative AI applications.
I understand that each small start-up is a bit different and that each requires a different approach that works for it.
Previous experience in start-ups, scale-ups, and megacorps.
3-year plan. Basically, for colos you also get it cheaper if you sign for 3 years, so that's more apples to apples. Equipment "annual" cost was calculated over a 5-7 year lifetime - I didn't go as far as calculating how much you could recover if you pawned it after 3 years…
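The amortization arithmetic behind that comparison looks roughly like this. All figures here are hypothetical, just to show the calculation, not actual prices from the comment.

```python
# Hypothetical numbers purely for illustration.
equipment_cost = 15_000        # one-time hardware purchase, $
lifetime_years = 6             # midpoint of the 5-7 year lifetime
colo_monthly_3y = 400          # colo rate with a 3-year commitment, $/month

# Spread the hardware cost over its lifetime to get an "annual" cost.
equipment_annual = equipment_cost / lifetime_years   # $/year
colo_annual = colo_monthly_3y * 12                   # $/year

# Apples-to-apples total over the 3-year contract term.
three_year_total = (equipment_annual + colo_annual) * 3
```

Resale value after 3 years would lower the equipment side further, which is the part the comment explicitly leaves out.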