Making sense of our kaleidoscopic visual world has been a decades-long grand challenge for computer scientists. That’s because there’s so much more to vision than mere seeing. To make the most out of machines, and ultimately have them move usefully and safely among us, they must understand what is happening around them with a superhuman degree of confidence.
The knowledge humans bring to every scene we encounter is what imbues that scene with meaning and enables us to respond appropriately. In the early days of computer vision (CV), artificial intelligence systems could only learn to discern objects by training on huge numbers of example images painstakingly annotated by humans, a process known as supervised learning.
When electrical engineering undergrad Yong Jae Lee first got hooked on the CV challenge, about 15 years ago, supervised learning reigned supreme. Back then, to teach a CV system how to spot a cat, you had to show it thousands of pictures of cats, with a box painstakingly drawn around each feline and labelled “cat”.
In this way, it could learn the constellation of features that makes felines uniquely identifiable. The idea that a CV system could learn to pick out the many important features of the visual world with little or no help from pre-labelled data felt so distant and difficult that even attempting it seemed borderline pointless to many in the field.
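For readers unfamiliar with that workflow, here is a minimal sketch of supervised image classification in PyTorch. The two-class cat/not-cat setup, the `data/train` directory and the ResNet-18 backbone are illustrative assumptions, not details from Lee's research.

```python
# Minimal sketch of supervised image classification in PyTorch.
# The dataset path and the cat/not-cat setup are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one directory per class, e.g. data/train/cat/ and
# data/train/not_cat/ -- every image carries a human-provided label.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)  # small backbone, trained from scratch for simplicity
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:        # labels come straight from the annotations
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is that every training image must arrive with a human-provided label; that annotation burden is exactly what unsupervised approaches try to remove.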
But Lee, now an associate professor at the University of Wisconsin-Madison, felt strongly even back then that the future of CV lay in unsupervised, or weakly supervised, learning.
The idea behind this form of machine learning (ML) is that a CV model takes in vast numbers of largely unlabelled images and works out for itself how to distinguish between many different classes of objects contained within them, from cats, dogs and fleas to people, cars and trees.
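One popular flavour of this is contrastive self-supervised learning. The sketch below shows a SimCLR-style objective, chosen purely for illustration and not drawn from Lee's own papers: the model never sees a label, it only learns to pull two random augmentations of the same image together in embedding space.

```python
# Sketch of a SimCLR-style contrastive (NT-Xent) loss, one common form of
# self-supervised learning. No labels are used: the model is trained so that
# two augmented views of the same image land close together in embedding space.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))           # ignore self-similarity
    # The positive for sample i is its other augmented view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```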
“Back then, unsupervised learning was not popular, but I had no doubt it was the right problem to work on,” says Lee. “Now, I think almost the entire community believes in this direction. Huge progress is being made.”
This shift towards unsupervised (aka self-supervised) learning was brought about by the deep learning revolution, says Lee. In this paradigm, ML algorithms have been developed that can extract pertinent information from enormous amounts of raw, unlabelled data. This learning has been likened to how babies learn about the world, albeit on digital timescales.
The blistering pace of progress in deep learning means the content of Lee’s graduate teaching evolves from one semester to the next.
“The state of the art this month will no longer be so next month,” he says. “There are frequent surprises, and paradigm shifts every few years. It’s a lot to navigate, but an exciting time for students.”
When he’s not teaching, Lee is pushing the boundaries of both supervised and self-supervised approaches to CV. In 2019 he received an Amazon Machine Learning Research Award (now known as Amazon Research Awards), in part to support a series of pioneering papers on real-time object instance segmentation.
Object instance segmentation goes a lot further than visual object detection: it is the ability of a CV model to not only detect that there are objects somewhere in an image, but also to accurately locate and classify each object of interest — be that a chair, human, or plant — and delineate its visual boundary within the image.
With instance segmentation, not only is every pixel in an image attributed to a class of object, but the model also differentiates between two objects of the same class by clearly segmenting each “instance” of that class.
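In code, the output of an instance-segmentation model makes this concrete. The sketch below uses torchvision’s off-the-shelf Mask R-CNN as a stand-in (YOLACT itself is discussed below); the image path and score threshold are placeholders.

```python
# Illustrative instance-segmentation inference using torchvision's Mask R-CNN,
# a different off-the-shelf model used here only as a stand-in for illustration.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("scene.jpg"), torch.float)  # placeholder path
with torch.no_grad():
    outputs = model([image])[0]

# Each detected instance gets a class label, a confidence score,
# a bounding box, and a per-pixel mask delineating its boundary.
keep = outputs["scores"] > 0.7
for label, box, mask in zip(outputs["labels"][keep],
                            outputs["boxes"][keep],
                            outputs["masks"][keep]):
    print(label.item(), box.tolist(), mask.shape)  # mask: (1, H, W) soft mask
```

Each element of the output corresponds to one detected instance, so two chairs in the same image yield two separate masks rather than one merged “chair” region.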
The challenge in 2019: although this instance segmentation task could be done to a high standard when applied to individual images, no system could yet hit high-accuracy benchmarks when applied to real-time streaming video (defined as 30 frames per second or above).
It is important for CV systems to comprehend visual scenes at speed because a range of burgeoning technologies depend on such an ability, from driverless cars to autonomous warehouse robots.
Lee, then at the University of California, Davis, and his students Daniel Bolya, Chong Zhou, and Fanyi Xiao, not only developed the first model to attain such accuracy at speed, but also managed to achieve it by training their model on just one GPU.
Their supervised system, called YOLACT (You Only Look At CoefficienTs), was lean and mean. It was fast because the researchers had developed a novel way to run aspects of the instance segmentation task in parallel rather than relying on slower, sequential processing. YOLACT won the Most Innovative Award at the COCO Object Detection Challenge at the International Conference on Computer Vision in 2019.
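The paper’s central trick is to split the work into two branches that run in parallel: one produces a small set of image-sized “prototype” masks, the other predicts a vector of mask coefficients for each detected instance, and the final masks are simply a linear combination of the two. A rough PyTorch sketch, with illustrative shapes:

```python
# Core idea behind YOLACT's parallel design, sketched with illustrative shapes:
# a prototype branch predicts k image-sized masks, a prediction head predicts
# k mask coefficients per detected instance, and final masks are assembled as
# a linear combination -- cheap enough for real-time video.
import torch

k, H, W, num_instances = 32, 138, 138, 5

prototypes = torch.randn(H, W, k)             # from the prototype branch
coefficients = torch.randn(num_instances, k)  # from the prediction head, per instance

# Assemble instance masks: sigmoid(P @ C^T), as described in the YOLACT paper.
masks = torch.sigmoid(torch.einsum("hwk,nk->nhw", prototypes, coefficients))
binary_masks = masks > 0.5                    # (num_instances, H, W)
```

In the full system the assembled masks are also cropped to each instance’s predicted bounding box and thresholded, but the cheap linear combination above is what keeps the per-instance cost so low.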
Since then, Lee’s team has gone on to markedly improve the efficiency and performance of the system, and the latest version of YOLACT, called YolactEdge (built with students Haotian Liu, Rafael Rivera-Soto, and Fanyi Xiao), can run on a device no bigger than your hand. And by making the YOLACT code available on GitHub, Lee has put the system into many people’s hands.
“It’s had a big impact. I know there are a lot of people using YOLACT, and at least one start-up,” says Lee. “This is not some intellectual exercise. We’re creating systems with real-world value. For me, that’s a tremendously exciting feeling.”
Another branch of Lee’s work, also supported by his Amazon award, pioneers new approaches to ML-based image generation. One research first is MixNMatch, a minimal-supervision model that, when supplied with many real images, teaches itself to differentiate between a variety of important image attributes. By learning to distinguish between an object’s shape, pose, texture/colour and background, the system can employ fine-tuned control to generate new images with any desired combination of attributes.
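The snippet below illustrates the mix-and-match idea in the broadest strokes only; the toy FactorEncoder and the final generator step are hypothetical stand-ins, not MixNMatch’s actual architecture or API.

```python
# Conceptual sketch of attribute mixing in the spirit of MixNMatch.
# The encoder below is a toy stand-in: the point is that each factor is a
# separate latent code that can be recombined across reference images.
import torch
import torch.nn as nn

class FactorEncoder(nn.Module):
    """Toy encoder mapping a 64x64 image to four separate latent codes."""
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.heads = nn.ModuleDict({name: nn.Linear(256, dim)
                                    for name in ["shape", "pose", "texture", "background"]})

    def forward(self, image):
        features = self.backbone(image)
        return {name: head(features) for name, head in self.heads.items()}

encoder = FactorEncoder()
img_a, img_b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
codes_a, codes_b = encoder(img_a), encoder(img_b)

# Mix and match: shape and pose from image A, texture and background from image B.
mixed = {"shape": codes_a["shape"], "pose": codes_a["pose"],
         "texture": codes_b["texture"], "background": codes_b["background"]}
# A trained generator would decode `mixed` into a new image combining those attributes.
```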
Lee continues to build on such work. This year he and his current and former students (Yang Xue, Yuheng Li, and Krishna Kumar Singh) unveiled GIRAFFE HD, a high-resolution generative model that is 3D aware.
This means it can, among other things, coherently rotate, move and scale foreground objects in a scene while independently generating the appropriate background. It is a design tool of enormous power with a near human-like grasp of how an image can be realistically, and seamlessly, transformed.
“As a user, you can tune different ‘knobs’ to change the generated image in highly controllable ways, such as the pose of objects and even the [virtual] camera elevation,” says Lee.
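Conceptually, those knobs amount to a handful of scene parameters fed into the generator alongside its latent codes. The toy snippet below is a hypothetical illustration only; the parameter names and the commented-out generator call are assumptions, not GIRAFFE HD’s real interface.

```python
# Hypothetical illustration of the "knobs" exposed by a 3D-aware generator;
# names and the generator call are assumptions, not the model's real API.
import torch

controls = {
    "object_rotation": torch.tensor(0.25),             # rotate the foreground object
    "object_translation": torch.tensor([0.1, 0.0, 0.0]),
    "object_scale": torch.tensor(1.2),
    "camera_elevation": torch.tensor(0.15),             # virtual camera angle
}

latent_fg, latent_bg = torch.randn(1, 256), torch.randn(1, 256)  # foreground / background codes
# image = generator(latent_fg, latent_bg, **controls)   # hypothetical generator call
```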
The depth of visual understanding required by such models is too great to be achieved through supervised learning alone, he adds.
“If we want to create systems that can truly absorb all of the visual information that, say, a human will absorb in their lifetime, it's just not going to be feasible for us to curate that kind of dataset,” says Lee.
Nor is it feasible to develop such technology without significant computational resources, which is why Lee’s Amazon award included credits for Amazon Web Services.
“What was particularly beneficial to our lab was Amazon’s EC2 [Elastic Compute Cloud]. At crunch times, when we needed to run lots of different experiments, we could do that in parallel. The scalability and availability of machines on EC2 has been tremendously helpful for our research.”
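As a flavour of what running experiments in parallel on EC2 can look like, here is a hedged boto3 sketch that launches one GPU instance per experiment; the AMI ID, instance type and key-pair name are placeholders to substitute with your own.

```python
# Hedged sketch: launch several EC2 GPU instances in parallel, one per experiment.
# The AMI ID, instance type and key-pair name are placeholders.
import boto3

ec2 = boto3.resource("ec2")

experiments = ["baseline", "larger_backbone", "longer_schedule"]

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder deep learning AMI
    InstanceType="p3.2xlarge",         # GPU instance type; choose per workload
    MinCount=len(experiments),
    MaxCount=len(experiments),
    KeyName="my-key-pair",             # placeholder key pair
)

# Tag each instance with the experiment it will run, for bookkeeping.
for instance, name in zip(instances, experiments):
    instance.create_tags(Tags=[{"Key": "experiment", "Value": name}])
    print(f"Launched {instance.id} for experiment '{name}'")
```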
[Embedded tweet from OpenAI (@OpenAI), April 6, 2022: a DALL-E image captioned “A photo of an astronaut riding a horse”]
While Lee is clearly energized by many aspects of vision research, he sees one looming downside: the massive influx of AI-generated art being published online.
“The state of the art now is to learn directly from internet data,” he says. “If that data becomes populated with lots of ML outputs, you’re not actually learning from so-called true knowledge, but instead learning from ‘fake’ information. It isn’t clear how this will affect the training of future models.”
But he remains optimistic about the rate of progress. The semantic understanding already being demonstrated by image-generation systems is surprising, he says.
“Take DALL-E 2’s horse-riding astronaut. This kind of semantic concept doesn't really exist in the real world, right, but these systems can construct plausible images of exactly that.”
The takeaway lesson from this is that the power of data is hard to deny, says Lee. Even if the data is ‘noisy’, having enormous amounts of it allows ML models to develop a very deep understanding of the visual world, resulting in creative combinations of semantic concepts.
“Even for somebody working in this field, I still find it fascinating.”
What advice does Lee have for students looking to branch into his dynamic field?
“There is so much activity in this machine learning space, what's really important is to find the topics you're really passionate about, and get some hands-on experience,” says Lee. “Don't just read a paper and then presume you know what you need to know. The best way to learn is to download some cutting-edge open-source code and really play around with it. Have some fun!”