Central to Packback’s discussion platform is the Curiosity Score, which measures the communication, credibility, and curiosity students demonstrate in their discussion posts. This feedback is provided to students in real time by a virtual coach that guides them to write higher-quality posts.
To learn more about the Curiosity Score, we sat down with Craig Booth, Packback’s Chief Technology Officer. He explained how our team developed Packback’s Curiosity Score, and why it’s so unique (and so uniquely human) in a world that increasingly relies on AI, especially in education.
What goal did Packback set out to accomplish with the Curiosity Score?
The goal of the Curiosity Score is to be a measure of effort. The general idea of using a complex AI to grade student work is a little scary, because those grades can have a real impact on students’ academic and career trajectories. The Curiosity Score, by contrast, is a transparent and explainable measure of student effort.
The problem with a lot of AI and machine learning systems is that tons of data go in, results come out, and none of it can really be explained in human terms. This matters enormously for bias and ethics in AI: if a model isn’t straightforward, we cannot unpack the prejudices it has absorbed.
By design, the Curiosity Score cannot have this problem, because it is simple and understandable. It also incentivizes positive behaviors, so even students who try to game the system will ultimately just write more and cite sources in their discussion posts… which is exactly what we want! Traditional platforms do not address that at all.
What role did equity play in the design process of the Curiosity Score AI?
Equity was a big part of the design process. Because the Curiosity Score is simple, rules-based, and a measure of effort, it avoids many of the common equity issues of AI.
For example, we are often asked if the Curiosity Score grades someone on their grammar, and the answer is no. We asked ourselves: do we want to penalize students who speak English as a second language or come from a wide range of backgrounds? We want it to be a measure of effort, which is not solely expressed through correct grammar.
We are also looking forward to a world where our AI assists students in hitting a minimum standard. It can handle the routine tasks: word counts, citing reliable sources, and, in the right contexts, bringing grammar up to par. These things should be a given, which frees up professors to do what is truly human: assessing students’ work critically. We want professors to be able to focus on the higher levels of Bloom’s Taxonomy, where students apply and truly understand the content.
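A transparent, rules-based check of this kind might look like the sketch below. Everything here is illustrative: the word-count threshold, the rule names, and the idea of treating a URL as a cited source are assumptions made for this example, not Packback’s actual implementation.

```python
import re

# Hypothetical threshold, chosen only for illustration.
MIN_WORDS = 50

def effort_check(post: str) -> dict:
    """Run simple, explainable rules against a discussion post.

    Each rule returns a named pass/fail result, so every part of the
    score can be explained to the student in plain terms.
    """
    words = post.split()
    # Assumption for this sketch: any URL in the post counts as a cited source.
    has_source = bool(re.search(r"https?://\S+", post))
    return {
        "word_count": len(words),
        "meets_word_count": len(words) >= MIN_WORDS,
        "cites_source": has_source,
    }
```

Because every rule is a named, human-readable condition, a student (or instructor) can see exactly which behavior raised or lowered the result, which is the transparency property described above.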
What are some of the dangers of AI in the classroom? How can these be avoided and how does Packback actively work against them?
The big issue with AI in the classroom is the issue of bias and ethics in data. Many companies use AI the same way: they take a huge pile of data, train a model on it, and then the model makes predictions. Unfortunately, the quality of that model is directly dependent on the data that was fed into it.
There is no truly unbiased or objectively correct data set. Every one of these models is trained by a set of data that a human being made decisions about, so naturally, it is very easy for bias to creep into machine learning applications.
The Curiosity Score itself, the number or “grade” students are getting, is not biased in any way. It is objective and transparent. Then, the real-time feedback students receive as they write is friendly advice to guide them to a higher Curiosity Score. We make the “grading” part transparent, and the advice part accurate and actionable.
How do tools like the Curiosity Score play a role in empowering students? How does that impact their learning experience?
The Curiosity Score is straightforward in what it wants from students, so students know what is being asked of them. It provides a clear window into how they’re performing in the discussion and participation parts of their courses. It takes a very fuzzy thing, the open-ended discussion board where students on a traditional LMS simply sign on and write without guidance, and makes it concrete.
It gives feedback in real time and ensures students are meeting the expectations for the course. Having a clear and transparent rubric for how you will be assessed is the first step in empowering students to grow to meet or exceed standards.
What is the responsibility of EdTech in ensuring that students are able to express their curiosity?
I think there is a responsibility when we have been invited into a student’s educational experience. Unfortunately, a lot of what has become part of education, such as the focus on standardized testing, has driven curiosity out of learning. We want to do anything we can to remedy that while helping students become critical thinkers.
It’s easy in our education system to lose sight of curiosity, but anything we can do that helps spark curiosity in students is good for the world. I believe we have a duty to do that and we are always working to support that goal.
Packback’s CTO, Dr. Craig Booth, has deep expertise in AI, machine learning, R&D, and software development best practices, honed over the past 14 years in academic research and engineering leadership. Craig is a widely published theoretical astrophysicist and an obsessive bread baker. He is also very funny and friendly.