Matt Calkins, the cofounder and CEO of Appian, a top provider of low-code automation solutions, has challenged the AI industry. This week, Calkins introduced new guidelines aimed at fostering responsible AI development and building trust between AI providers and their customers. His message is timely, as concerns about data privacy, intellectual property rights, and the rapid pace of AI advancements are at an all-time high.
Calkins addressed these issues not as a skeptic but as an advocate for AI's continued growth. He criticized the current approach to AI regulation, arguing that it overlooks vital issues like data provenance and fair use, and cited recent statements from the White House and Senator Schumer as examples of this oversight.
Big tech companies, according to Calkins, avoid discussing data provenance or fair use. This has resulted in a gray area where tech firms operate unchecked, while the rest of the industry and potential AI users watch in dismay, calling for reasonable rules.
Appian’s proposed guidelines aim to address these issues directly. They rest on four main principles:

- Disclose data sources.
- Use private data only with consent and compensation.
- Anonymize personally identifiable data and obtain permission before using it.
- Secure consent and compensation for copyrighted information.

Calkins believes these measures will build trust between AI providers and users, making the technology more relevant to individual users and organizations.
Calkins envisions the future of AI as a race for trust, not just data. By building trust with users, AI systems can access more valuable personal data, unlocking greater potential than the current model of indiscriminate data consumption. However, achieving this requires AI providers to adopt responsible development practices and prioritize user privacy and consent.
As a leader in low-code automation solutions, Appian stands to benefit from this shift towards trustworthy AI. Their platform allows organizations to quickly develop and deploy AI-powered applications while maintaining strict data privacy and security. Appian’s commitment to responsible AI development could give it a competitive edge as more companies seek AI solutions that prioritize user trust.
Calkins’ announcement comes at a time when the AI industry faces growing scrutiny from regulators, lawmakers, and the public. Concerns about job displacement, algorithmic bias, and the misuse of AI technology are increasing. By proposing these guidelines, Calkins aims to address these issues and position Appian as a leader in responsible AI.
Although Calkins has yet to secure partners for his guidelines, he remains hopeful about their impact. He is now reaching out to potential allies across the industry, believing that if the guidelines are kept simple enough, they will gain support.
The stakes for the AI industry are high. Calkins argues that the industry has “maxed out phase one” of data consumption, and the next phase will be defined by trust. Companies that build trust with users and commit to responsible AI development will thrive in this new era.
Calkins’ guidelines offer the industry a path for that transition. By emphasizing transparency, user consent, and respect for intellectual property, AI providers can build the trust needed to fully realize this technology’s potential. The question is whether the industry will follow Appian’s example.
As the AI race evolves, the winners will be those who build not just powerful algorithms, but trustworthy ones. With his vision and commitment to responsible development, Matt Calkins positions Appian at the forefront of this movement, setting an example for the rest of the industry.