Q&A With Andrew Gardner

Aug 26, 2021

Gardner explains how AI can rebalance the computer security game and teach us about human identity

Andrew Gardner, Ph.D., has collected comic books since he was a kid. Back then, his favorite character was Iron Man because — unlike other superheroes — Iron Man created his special abilities: he designed and built his suit from scratch, and then he used it to explore and protect the world. And that, Gardner says, is the ideal artificial intelligence (AI) embodiment to have.

Andrew recently joined Avast as its new VP of Research and AI. He’s fascinated by cool technology, both fictional and real, and is a leading researcher in the AI and machine learning (ML) communities. At Avast, he hopes to move the industry forward by helping shape our future conception of computer security, moving beyond the traditional idea of protection from file, script and email threats, to systems which protect transactions, interactions, conversations and attention.

And he’d also really love a garbage can that emptied itself.

A conversation with Gardner reveals, however, that while the tech is fascinating, it’s the people he’s really interested in. Keep reading to learn more about what one AI expert is excited about, what keeps him up at night, and where he thinks all of this is ultimately headed. 

There’s a lot of hype around AI, but it’s a poorly understood field. What are three things you wish the general public knew about artificial intelligence?

Well, firstly, there’s no universal definition for AI. That’s one contributor to the hype, because so many things, ranging from the mundane to the fictional, get lumped under the AI umbrella, creating confusion and, ultimately, disappointment.

So starting with a good definition helps cut through the hype. I think a good, general definition for AI is an intelligent system or program that does things a human would do. The system has to be able to sense its environment or collect data in some way, and to process that data. The AI then makes decisions based on that data. The decision-making bit is the real hallmark of AI.

For example, a self-driving car is heading to an intersection. It records and processes video and sensor data, computes its velocity and checks fuel levels. This is all amazing, and highly technical, but the AI aspect is bringing it all together to move from point A to point B. Safely.

For that, the car has to make choices. Does it turn left? Right? Stop? Go forward? What if there’s a pedestrian? How does it prioritize decisions? The decision-making is really important and historically under-emphasized. Without decisions you are probably talking about machine learning, or something simpler. Decision-making is hard and, frankly, we really don’t understand well enough how humans do it to be successful at mimicking them.
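To make that sense, process, decide split concrete, here’s a minimal sketch in Python. It is purely illustrative: the sensor fields, the rule priorities, and the `decide` function are hypothetical simplifications, not how any real autonomous vehicle is engineered.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    STOP = "stop"
    GO = "go"
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"


@dataclass
class Observation:
    # Hypothetical, pre-processed sensor readings; a real car fuses
    # camera, lidar, radar, and map data through learned perception models.
    pedestrian_ahead: bool
    light_is_red: bool
    route_says_turn: str  # "left", "right", or "straight"


def decide(obs: Observation) -> Action:
    # The decision layer: safety rules outrank the route plan.
    # This prioritization is the "bringing it all together" step.
    if obs.pedestrian_ahead or obs.light_is_red:
        return Action.STOP
    if obs.route_says_turn == "left":
        return Action.TURN_LEFT
    if obs.route_says_turn == "right":
        return Action.TURN_RIGHT
    return Action.GO


if __name__ == "__main__":
    # One tick of the sense -> process -> decide loop.
    obs = Observation(pedestrian_ahead=True, light_is_red=False,
                      route_says_turn="left")
    print(decide(obs))  # Action.STOP: the pedestrian outranks the turn
```

Even in this toy version, the interesting part isn’t the sensing but the prioritization: which inputs are allowed to override which decisions.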

This understanding gap mirrors where we see the biggest struggle with AI and society. The AI community is aware of ethical challenges, for example, but it isn’t formally set up to tackle them. We’re still in the very nascent stages of addressing this scientifically. Originally, researchers and developers just focused on functionality, not the bigger ethical questions. The impact of AI on society is complex, and we need lots of stakeholders participating.

Second, I’d love for people to have some perspective on AI. A lot of what people think of as AI isn’t what a researcher would consider AI. It’s not SkyNet, Terminator stuff. And the media presents it in two ways. On the one hand, it’s magic and could be the end of the world. On the other hand, it’s not magic, because my “smart” toaster still doesn’t make my toast right. It can be really hard to determine which is which, and the average person gets confused.

And third, AI is for everyone, not just practitioners. It is going to change our world for decades to come and everyone will interact with it in some form, in a way that is similar to how electrification changed the world over the course of a century. We’re just at the beginning of that change with AI. 

What are you most excited about when it comes to the future of AI?

It’s all kind of exciting! But the most interesting thing about AI, for me, is that it teaches me about humanity. And that’s a really cool thing. AI reproduces how people behave and act — or how we should behave and act. It makes you think a lot about what makes us human. Humans are very, very complex machines. We far eclipse what we’re currently dreaming about for AI. 

I want AI to change my world, but in ways that I can touch and see and feel. There’s a real trend right now of merging robotics and AI to create consumer products. How cool would it be if you had a litter box that scoops itself or trash that takes itself out? We’re even seeing delivery bots and drones.

It really gets interesting when AI starts interacting with the real world. What does a control system for power and traffic look like when the roads are full of self-driving cars? When will we have robots assembling the next generation of robots? I’m excited for us to move faster towards things that benefit society and help us. 

In my big vision of the longer term, robotics would help us innovate and invent as a species. I’m thinking of things like powerful AI that could do drug discovery or medical discovery. Things that would augment our human efforts in a synergistic way, instead of today, where we give AI specific tasks to complete. I’d like AI to be more of a partner than a very, very junior lab assistant. Iron Man instead of Microsoft Clippy.

What are you most worried about?

Not specific to AI, but to science and technology in general, I’m worried that people don’t give enough consideration to “What if?” We can build self-driving cars, but we start by solving technical problems. People raise potential ethical issues and the community will give a nod to that. But I don’t think they put enough effort into thinking about outlier events. 

For example, imagine a self-driving car economy that flips on overnight. What if 10 million jobs are replaced with that flip? What if the cost of self-driving car rides widens the gap between the poor and the middle class, or across different countries? What if new crimes are enabled or committed with self-driving cars? We don’t always think about what the cost of success could be; we just want to win the race and get there as fast as we can.

Then there are the ways that bad guys can exploit AI, which is where Avast sits. Historically, there have been real deterrents to exploiting security gaps at scale: things like access to technology and knowledge. These acted as a gating mechanism that kept the white hat vs. black hat war somewhat balanced. That’s all changed now. With AI, bad actors can target and automate, creating exploits at scale and at machine speed. Their ability to search for new vulnerabilities has grown exponentially. If computer security has been a cat and mouse game, it’s now tilting toward the cat, if the cats are the bad guys. We need AI to help rebalance the game: cat vs. mouse becomes cat vs. robo-dog.

What are your hopes for the future of AI in computer security?

We have to be really disruptive in how we even think about security. We need to think differently: How do we go from a box to a sphere? How do we even change the idea of security? What even is security?

Computer security used to mean — and probably to a lot of people still does mean — antivirus on the computer. But these days we use phones, IoT, tablets, and so on. Our interactions with other devices and other people are amplified by social media, ecommerce and digital transformation in our daily lives. So computer security now is more about making sense of how we, as humans, interact, where we place trust, where we spend our attention. I think of the future of security as a guardian angel that sits on our shoulder and protects us from both clear threats and less clear threats across these new interactions, without requiring a lot of explicit direction.

At the same time, if we really do our jobs well, traditional security products are designed to be forgotten: the user doesn’t hear from us unless we’re alerting them, which is rare. The user experience for security products in this model is atrocious: it’s basically a “grudge purchase,” buying “insurance” in the form of security software.

Can we change this? We need to change this! We have to be able to interact with the user and engage more meaningfully, consistently and usefully. If I could set a goal for this industry, it would be to revise how people view security products. I want them to be something more like a personal assistant or advisor that users trust and are actually interested in engaging with.

Other than AI and machine learning, what’s the topic you can nerd out about for hours? 

My favorite thing about AI, and what I dream and aspire to do, is AI for storytelling. It’s a really hard problem. You have to study how authors or creators go about setting out a story, how it’s organized, even sentence planning. So far, AI doesn’t come close to touching what humans can do. But imagine if you could have a quick conversation with an AI that could generate entirely new book, movie, or game worlds, with compelling and realistic characters and plot development, in the style you like…

That dream is a way off. Today there’s not really much intelligence in AI, at least not in the general-intelligence sense we ascribe to people. Typical systems work like this: you give the AI a prompt like, “It was a bright and sunny day,” and it starts completing the text, maybe a few sentences at a time. If you don’t like the completion, you try again and get a new result. The remarkable thing to laypeople is that the generated text will usually have no grammatical or syntactical errors. But that doesn’t mean it’s sensible. It will generate correct, complete sentences, but they don’t really all hang together.
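That prompt-and-retry loop is easy to try with off-the-shelf tooling. Here’s a minimal sketch using the open-source Hugging Face transformers library with the small GPT-2 model; the library and model are our choices for illustration, since Gardner doesn’t name a specific system.

```python
# pip install transformers torch
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

prompt = "It was a bright and sunny day"

# Sample several continuations at once; rerunning this is the
# "if you don't like it, try again" step described above.
results = generator(
    prompt,
    max_new_tokens=40,       # length of each continuation
    num_return_sequences=3,  # three different completions per call
    do_sample=True,          # sampling makes each completion different
)

for i, result in enumerate(results, 1):
    print(f"--- completion {i} ---")
    print(result["generated_text"])
```

Run it a few times and the point above becomes obvious: the sentences are grammatical, but across sentences the story rarely hangs together.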

Still, there are some fun examples out there. AI Dungeon, for example, is a neat mobile game that uses state-of-the-art AI for an interesting choose-your-own-story approach. Hollywood is interested in AI, too. I’m a big fan of sci-fi and I enjoyed the television show Stargate SG-1. I learned recently that the cast and producer are doing an experiment where they’re having an AI generate a screenplay for an episode and then the cast is going to act it out. My expectations are low, but it should be fun.

Just to circle back to storytelling and AI for a moment: I love this marriage because it really makes you ask, “How do humans think and reason? How should (or do) AI systems think?”

Storytelling is so fundamental to our human existence and identity. That’s an area where I’d like to see AI really bloom.

