Cyber security, AI and quantum computing are priorities for Veracity Trust Network
Cyber security, AI and quantum technologies are hot topics for Veracity Trust Network.
The company has been busy in recent weeks, attending events in Belfast and hosting a roundtable in Leeds on these subjects.
With the AI market projected to reach a staggering $407 billion by 2027, there is immense value for brands in being cyber aware and in understanding the future potential of quantum technology.
Veracity’s CTO Stewart Boutcher and Data Scientist Reuben Sodhi headed over to Belfast in July for a programme of events at Queen’s University Belfast’s Centre for Secure Information Technology.
The event was organised by the National Cyber Security Centre (NCSC) For Startups, a unique initiative that offers startups insights and guidance from experts, enabling them to develop, adapt or pilot technology to meet the biggest cyber security challenges facing the UK.
Taking part were industry, Government, and academic representatives, with the aim of developing greater collaboration in the field of cyber security.
Then at the end of the month, Veracity hosted a roundtable in partnership with Leeds-based SMMA agency Soar With Us, at Platform Bruntwood SciTech, on “What businesses need to know about AI, Quantum and Cybersecurity – the opportunities & dangers”.
Speakers included VTN’s Stewart Boutcher and Reuben Sodhi, as well as Enzai founder Ryan Donnelly, Joe Marston from Soar With Us, K9 Nation founder Becky Baker, and Codurance’s Angela Channer.
Also present for the roundtable were Jason Crispin, from IMAGINaiTION, Sherin Mathew, founder and CEO of AI Tech, Tom Wilson, Solution Architect at data and analytics strategy consultancy Cynozure, and Ihsen Alouani, from Queen’s University Belfast.
Stewart Boutcher, CTO at Veracity Trust Network, said: “I am so pleased that we could put on two events of this type in quick succession, drawing together experts from across the cybersecurity and AI clusters in Northern Ireland and West Yorkshire to share knowledge in one space.
“I, and the team at Veracity Trust Network, are committed to taking leadership roles in the thinking around how we can continue to protect organisations from cybersecurity threats now and in the future; events like this are vital to assist in connecting disparate silos, for the benefit of all. Thank you to the team at Veracity and our partners across the regions!”
Cybersecurity, AI & Quantum Computing
The main topics up for discussion were cybersecurity, AI and quantum computing, and how each was being used and developed in Yorkshire and the North.
One of the most important discussion topics was the potential threat from quantum computing: the US Department of Homeland Security is currently concerned that hackers are stealing data today so that quantum computers can crack it within the next decade.
Stewart quoted former US Secretary of Homeland Security, Alejandro Mayorkas: “The transition to post-quantum encryption algorithms is as much dependent on the development of such algorithms as it is on their adoption.
“While the former is already ongoing, planning for the latter remains in its infancy. We must prepare for it now to protect the confidentiality of data that already exists today and remains sensitive in the future.” He went on to discuss what was happening with AI, and how rapid detection is key to the prevention or mitigation of breach attempts at the earliest opportunity.
He then invited Veracity’s Data Scientist Reuben Sodhi to explain what the company was currently doing with AI to mitigate fraud in digital marketing and advertising.
“We are seeing more sophisticated methods of creating bots which look more like humans, and the problem is, if you have static, algorithmic-style rules, you have to change each one for each new bot.
“And that’s not feasible as we’re seeing new types of bots all the time. So, the justification for using AI in bot detection is that it provides a much more dynamic approach.”
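To illustrate the contrast Reuben describes, here is a minimal, hypothetical sketch in Python: a hand-written static rule that must be edited for every new bot pattern, alongside a learned classifier that can instead be retrained on fresh labelled traffic. The feature names, thresholds and data are invented for illustration and are not Veracity’s actual detection model.

```python
# A hypothetical sketch contrasting static rules with a learned classifier
# for bot detection. Features and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each session: [requests_per_minute, avg_mouse_movement, session_seconds]
sessions = np.array([
    [120.0, 0.0,   4.0],   # bot-like: rapid requests, no mouse activity
    [  3.0, 0.9, 310.0],   # human-like
    [ 90.0, 0.1,  12.0],
    [  5.0, 0.7, 180.0],
])
labels = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

def static_rule(session):
    """A fixed rule: must be hand-edited whenever a new bot pattern appears."""
    requests_per_minute, mouse_movement, _ = session
    return requests_per_minute > 60 and mouse_movement < 0.2

# A learned model adapts by retraining on newly labelled traffic,
# rather than by rewriting a rule for each new bot variant.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(sessions, labels)

new_session = np.array([[70.0, 0.05, 8.0]])
print("static rule says bot:", static_rule(new_session[0]))
print("model says bot:", bool(model.predict(new_session)[0]))
```

In this sketch, retraining `model` on fresh labelled sessions replaces the per-bot rule edits that Reuben argues are not feasible at scale.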
He also talked about the importance of governance, and how the current complexity and scale of AI made it vitally important to be accountable for the actions any AI programs were taking.
“It’s important not only for customers, but for stakeholders. If a client asks for details on how a particular AI output was reached, you have to be able to go back through the data and see the exact process it took.”
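As a rough illustration of the traceability Reuben describes, the hypothetical Python sketch below appends a record of every prediction (inputs, model version, output and timestamp) to an audit log, so that a specific decision can be looked up later. The function name and log format are invented for this example and are not Veracity’s actual system.

```python
# Hypothetical audit trail: log every prediction so a specific AI decision
# can be traced back later. Invented for illustration only.
import json
import time
import uuid

AUDIT_LOG = "predictions.jsonl"

def audited_predict(model, features, model_version):
    """Run a prediction and append a traceable record to the audit log."""
    prediction = model.predict([features])[0]
    record = {
        "id": str(uuid.uuid4()),         # unique handle for this decision
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model produced it
        "features": list(features),      # the exact inputs it saw
        "prediction": int(prediction),   # what it decided
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction
```

Given such a log, answering a client’s question about a particular output becomes a matter of looking up the record by its id and inspecting the inputs and model version it captured.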
AI Governance
Ryan Donnelly, founder of Enzai, compared development in the quantum space to the story of the film Oppenheimer, which examines the development of the atomic bomb, and said: “It was a case of build it and worry about the consequences later, and I feel we’re seeing a similar ‘charge in’ with the quantum space; it is only a matter of time before it turns the world upside down.”
He then went on to discuss AI governance and how, in technological spheres, there is a tendency to “move fast and break things”, and how algorithms can already break things in the real world if you don’t retain responsibility for them.
“That’s what governance is all about. It’s making sure that the relevant stakeholders are brought around a table to consider all the issues and to make sure you have an auditable log of all decisions that went into drawing or building the algorithm, whatever that may be,” he added.
Taking these steps, and making sure to register and record all developments, ensures AI moves forward in the right direction, not the wrong one, he said.
Ryan also touched briefly on the EU AI Act, which should come into force next year and which aims to enact ‘horizontal regulation’ for AI usage, and said he believed there would be legislation in place in the UK before long because AI technology was too powerful to be left without it.
Ethics in AI
Sherin Mathew, founder and CEO of AI Tech, followed on with a discussion about the ethics involved in AI. He said governance was one of the actions you should take to be ethical and went on to explain how he was involved in the Government’s current White Paper consultation on AI regulation.
He added: “This is where there can be problems. When it comes to governance, it needs to be unbiased, and it should be almost like a moral code that we want to put in place. When governance isn’t defined, key stakeholders aren’t always invited into the discussions or to help make decisions, and that’s when conflicts can come in.”
Sherin went on to say his approach was to take a “top-down” view: if your business or your idea has an ethical core, then your ultimate intentions and the outcomes for stakeholders will be ethical.
He was followed by Tom Wilson, Solution Architect at data and analytics strategy consultancy Cynozure, who said that within the industry as a whole, the journey with AI is still in its infancy.
“Of the companies thinking about data, AI, machine learning and the stack, around 80% of them will never make it to implementation. But part of the work we’re doing with Veracity Trust Network is making sure that the model and the set of rules sit within a framework that can still maintain governance, so that if you need to go back, you can see the data it was trained on and have visibility of the changes.”
At the early stages, Tom said, companies weren’t necessarily thinking about ethics, governance or sets of rules for AI and ML, partly because they were more interested in what it could do for them, and partly because there isn’t a visible set of rules to follow.
Within academia, Ihsen Alouani, from Queen’s University Belfast, spoke about how researchers were struggling to keep up with how fast applications for AI, especially within security and privacy, were changing and developing.
“From an ethics perspective, it should not be an afterthought; it should be integrated into the thought processes of whoever is dealing with any kind of machine learning, especially when it includes people’s personal data,” he added.
AI for Small Business
Becky Baker, owner of K9 Nation, an app which connects dog owners and lets them share useful information, spoke about how her business was just looking into AI solutions and how they could benefit her.
“I’m lucky, I’m in this room learning about top-down ethics and governance, but there’s probably lots of small businesses out there who don’t know where to find information, so they might implement a solution which doesn’t follow ethical practice or governance, but it won’t be done maliciously. How do they find out how to do these things properly?
“It’s the same as with GDPR, how long did it take businesses to implement that?” she added.
Ryan added that there were already heavy regulations in some spaces, specifically surrounding financial uses of AI, so hopefully there would be procedures in place to protect against people trying to make a “quick buck”.
Jason Crispin, from IMAGINaiTION, an AI-powered app which generates stories for children, spoke about how his company was making sure to do the work first and create a product that is ethical and follows good governance.
“We’re looking at the extra rules we have to put in place to help deal with generative AI which doesn’t do what it says it should do,” he added.
Joe Marston, founder of Leeds-based e-commerce growth agency Soar With Us, spoke about how transparency was vital when using AI.
“We use AI across the board, starting with service delivery. Being transparent is the first step to being ethical. We’re producing video services for brands, but we can’t always find speakers for the European market, so we’re asking if we can use an AI voiceover to generate the speaker for the German language, for example.
“We’re using it in every part of the agency, from blogs, SEO and newsletters to analysing data and generating strategy off the back of that. It’s across the board, and we’re looking to build our own internal software,” he added.
AI versus Software
Both Stewart and Angela Channer, Principal Software Craftsperson at Codurance, spoke about the art or craft of creating software.
“I think software is an art as much as it is anything else, and there is a process around it. I’m very concerned about using AI to shortcut software development, as it turns software into a black box that no one understands properly,” said Stewart.
Angela added: “The thing we’ve found with these tools is that they can be used in collaboration with humans. As long as there is human intervention, and you’re using AI for a specific purpose, like using it to summarise something, then it can be quite useful.”
She went on to say that there seemed to be a current trend of trying to turn an AI tool designed for one specific purpose into a “jack of all tools”, and that’s where issues arise; if you’re inputting data into an AI tool, you don’t know where that information is going to end up.
Stewart queried whether the use of certain AI tools could even be ethical in the first place simply because of the way the dataset had been created.
In Conclusion
The roundtable also discussed how information about the ethics of using AI should be spread, and where education on AI and how to use it properly should start.
Angela added: “As an industry, we should be sharing our knowledge and allowing companies to come and have these conversations. Running roundtables and having this dialogue helps people learn the pain points and share what’s been learnt, so it doesn’t end up going very wrong.”
Jason said he was concerned that, as a nation, there was a risk of losing creatives because of AI bots, and that, as an edtech company, they were looking at how to make sure people can use AI and data to be creative. He said it was important that schools and universities work with their students to learn, not automate.
Stewart concluded by saying that ethics is key to everything from the start, from the point of creating something; that AI is a tool, not yet an entity; and that governance should be used to understand both why you’re doing something and how you’re doing it.
“Part of what we’re doing, what I’m passionate about, is to try and spread this information around, to run events, to help people make connections and to share knowledge,” he added.