Can you trust AI?
Here's a story about AI and trust.
When we first started UrbanForm, we talked a lot about how we use AI (artificial intelligence).
The response we most often received was not what we expected or wanted.
To the people who mean the most to UrbanForm, our customers, it was clear that AI was not something they could *yet* trust.
Since OpenAI's ChatGPT came out, has that changed?
Well, this has been our experience:
When OpenAI's API was released, we were excited to quickly build it into our code base. We believed it had the potential to streamline or replace much of our hand-written code.
So we started testing it immediately.
And what we found was that it was wrong. Too often. Too much.
We could not rely on it or trust it to the degree we knew was necessary.
Trust comes first
When talking about zoning with potential customers, the first question we receive, both before and after ChatGPT, has been the same: how can we trust the zoning information UrbanForm provides?
Addressing the issue of trust first and foremost has been one of our biggest lessons.
And thus, one of our biggest assets.
This is why we say that the trust we've gained from our users is the single biggest accomplishment we've yet achieved.
Our technology is only the means to that end.
If you want to dive into the intricacies of the AI we use, how we developed, deploy, and train it, and its scope and limitations, we can talk about that.
But it's not important or relevant to most people. Certainly not most of our customers.
What we *do* spend a lot of time talking about is the process we go through and the steps we take to ensure that the zoning information is something our customers can trust.
Proof in the usage
We love that the proof of UrbanForm's value is simply our customers' continued use and trust: architects, developers, and building professionals whose livelihoods depend on the usefulness of the zoning information we provide.
It's technology for better buildings and cities :)