Tanmay Bakshi talks programming with Julia and Swift
Programmers are problem solvers first, and code writers second, says Tanmay Bakshi in this interview. He shares how programmers should learn to code and develop machine learning apps.
The next wave of programming will focus on AI and machine learning, as well as mobile application development. For enterprises to take advantage of these emerging technologies, they need to put quality first, which means both assessing how schools and businesses train programmers and learning the practical limits of AI.
But, if Tanmay Bakshi could learn to harness machine learning by age 16, so can enterprise developers. Bakshi, a software developer and developer advocate at IBM, aspires to teach young and inexperienced programmers how to work with programming languages for machine learning. In particular, Bakshi favors languages like Julia and Swift that are built on LLVM, a collection of modular and reusable compiler and toolchain technologies used to generate machine code.
In this wide-ranging interview, Bakshi covers exciting developments in programming language compilers, the reach of machine learning and how to handle bias, as well as how to learn coding.
Two promising machine learning languages
Bakshi favors two languages for machine learning:
- Julia, a high-level, just-in-time compiled programming language tailored for scientific and numerical computing.
- Swift, a general-purpose, ahead-of-time compiled programming language, developed and most often used to create apps for Apple platforms.
Julia is a popular language for machine learning thanks to its simplicity. Developers can accomplish more in Julia with fewer lines of code, Bakshi said, and write an app entirely in that language -- no need to write components in C, for example.
"[The Julia language] leverages this new compiler technology in order to create a programming experience that is as intuitive or as easy as Python's, but, at the same time, it's as high-performance as a language like C," Bakshi said. "So your code can be compiled on the fly."
While the Swift programming language offers potential for machine learning, Bakshi said, its primary use is iOS mobile app development. Apple created the programming language for native mobile app development. Between its simplicity and easy integration with Apple platforms, Swift has quickly caught on among new and experienced programmers alike.
"Developers love to program in Swift, and it's a great way for kids to get started, because the syntax is just generally so simple," Bakshi said. "[Native] integration into a platform really sets applications apart."
Quality in machine learning apps
With any language for machine learning, the question remains: How can organizations ensure machine learning data quality? Machine learning apps present interpretability and bias issues that are difficult to debug. While there's some work to be done in this area, Bakshi said, a human must be in charge of validation if the stakes are high.
"If you're doing something as important as diagnosing cancer, then obviously, there's still going to be a human in the loop that is looking at the model, is looking at its predictions," he said. "That human in the loop is the one verifying [the prediction], to make sure there's no edge cases that the network is missing -- or that the network is making a mistake. But, at the same time, the network exists to enable that human to not have to do everything from scratch -- to only need to do verification."
Bakshi is optimistic that both IT professionals and the general public will begin to understand AI on a deeper level in the near future, which will help debug quality issues when they pop up.
"We're going to start to see people understand the reason for some of these problems, things like the bias problem," he said. With what Bakshi called "less-inflated expectations" of AI, people will be able to understand the reasoning behind some of its problems and solve them.
Inside Bakshi's world
In the interview, Bakshi used one of his projects at IBM to explain how innovators should approach scientific advancement. With straightforward software projects, developers can build off a wealth of experience and know-how -- their own, as well as that of coworkers and the broader development community -- to get a product released in just a few iterations. But scientific advancement requires more creative thinking, not to mention lots of trial and error. Thus, programmers and innovators must see failure as a key step in the process.
"I have been working on a new project in the field of quality assurance, where what I'm working on is scientifically innovative," Bakshi said. "What I'm working on in terms of compiler technology or machine learning technology doesn't exist yet. So, there's a lot of trial and error going on with some of the initial components, like, for example, [with] this custom instrumentation that we're building. This requires many, many stages of just starting from scratch, trying again."
Unfortunately, he says, young programmers are often taught in fundamentally the wrong way. Despite the unique advantages to be discovered in each programming language, he says programmers should learn to solve problems first -- not memorize functions and expressions without context. Bakshi put these lessons to work in two of his books, Tanmay Teaches Julia for Beginners: A Springboard to Machine Learning for All Ages and Hello Swift!: iOS app programming for kids and other beginners.
"Learning by example, learning by solving problems that you think are interesting, that's what's really going to help, because then you're coming up with the problems, you're coming up with the logic to solve them, and then you're going ahead and implementing the code for that," he said.
Bakshi also discussed in the interview what he sees in the future for machine learning and AI, and what he personally hopes to accomplish in the decades ahead.
Editor's note: Bakshi spoke with site editor David Carty and assistant site editor Ryan Black. The transcript has been lightly edited for clarity and brevity.
David Carty: Tanmay, you've learned a lot as a young, aspiring developer. I'm sure you built some apps from a young age. For kids today who are experimenting with programming, what sorts of guardrails would you recommend they put in place? Young programmers can make mistakes -- so can professional ones, by the way -- but how should their parents or instructors, or how should they even, prevent themselves from making a costly mistake?
Tanmay Bakshi: So, I feel like it's important to generalize there, and to really specify what we mean by 'mistake.' What I'll start off by saying is that I do currently believe that a lot of the ways that young people are taught to code are wrong. It's fundamentally not being taught the right way. And the reason I say that is because, look, if you take a look at what programming is, I believe that programming isn't about writing code. It's not about taking a language and writing code in that language. It's more about how do you take a problem, and how do you deconstruct it into its individual pieces and use the building blocks that programming languages provide you with to solve that problem? Then going from that logic and that flow to writing code is simple. That's the easy part, right? I feel like currently, the way young people are taught to code is very, very code-centric. It's very programming-centric, and not logic-centric. The goal is, 'Hey, look, you can type this code, and you can make the computer do something,' not, 'Hey, there's this logic that you can write to solve this problem, and the computer is the one solving this problem if you write the code for it.' So I feel like that's one thing. Sometimes it can be difficult to teach in that way.
Then, as you mentioned, everybody makes mistakes -- not just beginners, professionals make mistakes as well. When it comes to programming and technology in specific, although this does really apply to every single field, you can't expect things to work on the first try, right? I know for a fact that whenever something complex has worked on the first try for me, it's usually because it hasn't been tested hard enough. So really, I believe it boils down to a few key things. First of all, make sure that you're not just following a prebuilt curriculum or course or something of that sort -- a course is very, very important as a guideline to keep in place. But, learning by example, learning by solving problems that you think are interesting, that's what's really going to help, because then you're coming up with the problems, you're coming up with the logic to solve them, and then you're going ahead and implementing the code for that. Being able to practice that computational thinking is absolutely key. From there, perseverance and persistence are also really, really important, because, again, when it comes to technology, things don't work on the first try. They take many tries, and it's really sort of this evolution, and remembering that every single time you solve a problem you're learning, and you won't be facing problems like that again.
So that's actually something that I try and do with my books as well. I like to share my knowledge through all sorts of media, books being one of them. I don't just want to write books that have, you know, 'Okay, here's a programming language. Here's how you use all the individual components,' just like a lot of other books. But, rather, in every single chapter, how can we build actual example applications that leverage all of the different building blocks that we've talked about so far to build more and more complex applications? And, in doing so, how can we, first of all, step back from the code, look at the problem, solve the problem and then write the code -- even for complex topics, like machine learning. So, for example, my latest book on Julia, Tanmay Teaches Julia -- I've written it with McGraw Hill, and it's the first in a series called Tanmay Teaches. This book is for all sorts of beginners that want to learn the Julia language. We start off from scratch [as if] you've never programmed before. And, at the end, I actually introduce you to machine learning technology through simple examples, even getting into Google's DeepDream. All of that happens throughout the course of the book with examples, teaching you that computational thinking. I feel like that's the most effective way to learn how to program: focus on logic, focus on computational thinking and problem solving. Code writing comes second.
Ryan Black: So, you mentioned, of course, the way people actually learn programming languages -- they're not going to get things right the first time. But, of course, I think what a lot of IT enterprises try to do is reduce the number of first-time errors, because they [want to] get a product out to the market fast. So, how would you balance that reality with what you're saying? You're describing an approach that really sounds more conducive to the way people actually learn about programming.
Bakshi: It kind of depends on how you look at it, right? If the question is, 'How should young people learn how to code without having to think about an enterprise mindset just yet?' Then, yeah, that's the way that I believe young people should learn how to code. You should be learning by example, you should be learning with logic, and focusing less on actually writing code. But, then, if we were to sort of switch over to enterprises and how we build and release production-grade products, then things change a little bit.
As I mentioned, things don't usually work on the first try. Also, one more thing we have to make a distinction between is scientific advancement and innovation versus just releasing products. So, for example, at IBM, let me give you an actual firsthand example: I have been working on a new project in the field of quality assurance, where what I'm working on is scientifically innovative. What I'm working on in terms of compiler technology or machine learning technology doesn't exist yet. So, there's a lot of trial and error going on with some of the initial components, like, for example, [with] this custom instrumentation that we're building. This requires many, many stages of just starting from scratch, trying again. However, when it comes to things that are more standard, like, for example, IBM Watson Studio, which used to be called Data Science Experience, that took IBM six [or] seven months to get an initial prototype for, and then another couple of months to release it out to the public. There was no [fundamental] innovation required in terms of advancing technological barriers. But, instead, it was, 'Alright, you know, all these little bits and pieces work. They're already there; we've got to put them together into one easy-to-use package.'
So, I would say it depends. I feel like the scope of the question sort of increased there. But if it's about young people learning to code, then yes, it's about that logic. It's about learning that nothing's going to work on the first try, but you're going to gain more experience and that experience is going to help you in the future. When it comes to building applications, for example, IBM Data Science Experience or Watson Studio, that's going to be super helpful. In a few iterations, you're going to have complex technology working. But, then when it comes to research or scientific advancement, then again, it comes back to that, for lack of a better phrase, trial and error. You're going to need to try multiple different things to see what works and what doesn't.
Black: I also wanted to ask you, so, say I have no preexisting knowledge of Julia, which of course you wrote a book about, but I want to build an app with Julia. What [aspect of the] programming language should I set out to learn about first?
Bakshi: Here's the thing. Julia has been purpose-built for scientific computing and machine learning and AI. So, you would think that, if I want to develop an app in Julia, then maybe I should start programming somewhere else first, and then come over to Julia, since it's meant for all these complex topics, right? I wouldn't necessarily advise starting off in a language like C or C++, because then, again, you're focusing too much on the nitty-gritty of code and not enough on computational thinking, assuming you've never programmed before. You would think that there would be a similar barrier with Julia, but I'm glad to say that there isn't. Julia leverages compiler technology, which is something that I'm really passionate about; it leverages this new compiler technology in order to create a programming experience that is as intuitive or as easy as Python's, but, at the same time, it's as high-performance as a language like C. So your code can be compiled on the fly. It can do type inference. It can do all these wonderful things, and it can do it really fast, to the point where you can write CUDA kernels in Julia; you can write all sorts of things and have them run at C [or] C++ performance. So that's sort of the entire goal of Julia: to be a language that is as easy to use and learn as Python, but still high-performance enough to be used for all sorts of different applications like machine learning, and to support native compiler extensions, all sorts of things.
Essentially, what I'm trying to say is that you don't need to start off with another language. You can start off with the Julia language because of the fact that it's so simple, because of the fact that it offloads a lot of the work from you to the compiler. Julia is a great way to start. It enables you to focus on computational thinking, but at the same time get into next generation technologies.
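For readers who want to see what "CUDA kernels in Julia" can look like, here is a minimal sketch using the CUDA.jl package (the kernel name, sizes and launch configuration are illustrative, and running it assumes an Nvidia GPU with CUDA.jl installed):

```julia
using CUDA  # assumes the CUDA.jl package and an Nvidia GPU are available

# An ordinary Julia function used as a GPU kernel; Julia's compiler lowers it
# to GPU machine code, with no C or CUDA C involved.
function axpy_kernel!(y, a, x)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return nothing
end

x = CUDA.fill(1.0f0, 1024)
y = CUDA.fill(2.0f0, 1024)
@cuda threads=256 blocks=4 axpy_kernel!(y, 3.0f0, x)  # y now holds 3x + y elementwise
```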
Black: I'm glad you brought up machine learning because I wanted to ask you about how programmers can make use of machine learning libraries with Julia. What are some of the unique benefits to the development of machine learning apps with Julia?
Bakshi: Sure. So, look, machine learning technology and Julia fit perfectly together. And the reason I say that is because, again, many different companies believe machine learning is important enough to deserve first-class compiler support. What that means is, machine learning is an important enough suite of algorithms, a suite of technologies, that maybe we should be integrating that capability directly into programming language compilers to make our lives easier. Very, very few technologies have seen this sort of purposeful integration within programming languages, but Julia is one of the languages that fits with that sort of model.
So, for example, let's just say we want to work on one of these exotic machine learning, deep learning architectures called tree recursive neural networks. So, not recurrent neural networks -- these don't understand data over time; they understand data over structure -- so, recursive neural networks. For example, let's just say you would take a sentence and then split it up into a parse tree using the Stanford NLP [natural language processing] parsing toolkit. If you were to take that tree, how can you then leverage not only the words in that sentence and the tokens, but also the structure of that sentence that the toolkit extracts? Traditionally, with TensorFlow, this would require several hundred lines of code, because then you have to do this weird hack in order to have the model understand different kinds of inputs based off of whether you have a leaf node or a node with multiple different inputs and different details; it gets really, really complex handling the differentiation and the gradient descent for such an exotic architecture, because this isn't usual. This isn't something you do every day with TensorFlow, and it's very, very use case specific. But then, when you come over to Julia, suddenly it's five lines of code, because now the compiler is able to look at all of your code at once and say, 'All right, here's a graph for literally everything you're doing,' just by doing source-to-source automatic differentiation. So, using this new library called Zygote, you can actually take literally any Julia function, take a look at the LLVM code for it -- the intermediate representation for Julia -- and generate a new LLVM function that represents that same function's gradient.
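For a sense of what that looks like in practice, here is a minimal Zygote sketch (the function is just an example, not Bakshi's code): an ordinary Julia function goes in, and its gradient comes out.

```julia
using Zygote

# Any ordinary Julia function...
f(x) = 3x^2 + 2x + 1

# ...can be differentiated directly; Zygote derives the gradient code from
# the function's own compiled representation.
gradient(f, 2.0)  # returns (14.0,), since f'(x) = 6x + 2
```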
So these sorts of language-native compiler extensions, for example, are just one of the reasons that Julia is perfect for machine learning. There are multiple different libraries that make great use of it. For example, the Flux library is built in 100% Julia code, and it can run exactly what I'm saying -- like, five lines of code for a recursive neural network. It can run all this sort of stuff; it can run on GPU; it can run on all sorts of different accelerators without having to use any C code. It's all written in native Julia, because Julia has the capability to compile down to whatever it is that these accelerators need. So, the fact that we're only using one language makes it so people can actually contribute to Flux. The fact that Flux is being written in Julia makes it so you can define exotic architectures in like five lines of code. All of this added up together makes Julia like the perfect environment for machine learning, deep learning development.
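As a rough, hypothetical sketch of what "a recursive network in a few lines" can look like in Flux (the layer sizes and helper names are made up for illustration, not taken from Flux's examples): leaves of a parse tree are word vectors, and every internal node combines its two children with one shared, trainable layer.

```julia
using Flux

combine = Dense(128 => 64, tanh)           # two 64-dim children in, one 64-dim parent out

encode(leaf::AbstractVector) = leaf        # a leaf is already a word vector
encode(node::Tuple) = combine(vcat(encode(node[1]), encode(node[2])))

# Encode a tiny parse tree of stand-in 64-dimensional "word" vectors.
tree = ((rand(Float32, 64), rand(Float32, 64)), rand(Float32, 64))
h = encode(tree)                           # a single 64-dimensional tree embedding
```

Because encode is just a recursive Julia function, Zygote can differentiate straight through the tree structure, which is the kind of exotic architecture that takes far more scaffolding in graph-based frameworks.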
Black: It sounds like, in a nutshell, what Julia does for machine learning development is essentially just make it much simpler, [it] reduces the number of lines of code that developers have to look at or write. Is that correct?
Bakshi: In essence, yes, but I would also add on to that. First of all, it reduces the number of lines of code that a developer would need to write or would need to look at. But it also reduces the number of languages you need to use. So you don't need to write components in C or components in Fortran or whatever. You can just write everything in Julia.
Carty: Tanmay, in conversations I've had about AI in QA circles, things like AI bias come up. Some people look at machine learning and neural networks as a black box. I know some machine learning practitioners might bristle at that, but that is some of the perception out there. So, there's still a lot of worry about how to ensure quality. In machine learning models, how do you think organizations should approach that?
Bakshi: I would say that the way you apply machine learning technology, and the way that you solve problems like this -- we touched upon many different problems, right? So, the interpretability, the black box problems, that's one group. Then we touched on the bias problem. I would say that all of these problems don't have one individual clear-cut solution. It depends very specifically on the kinds of architectures, kinds of models you're using, where you're using them, why you're using them. It depends on a lot of different things. Now, I will say, if we were to want to generalize a little bit, there are some solutions to these problems. For example, for bias, there's the open source AI Fairness 360 toolkit from IBM Research that you can use; it enables you to use some cutting-edge techniques to identify bias in your data and even in machine learning models, and try [to] dampen it.
If you were to think about interpretability, then you could probably use something like [IBM's] Watson OpenScale. You can use those services in order to take a look at what exactly it is your models are interpreting from your data. Granted, these don't work on more complex, deep neural network models; they do work on machine learning models. Now, let's just take a look at a simple example. Let's say we're taking a look at neural machine translation, and we want to be able to figure out why exactly a model changes one word or one sentence in English into another sentence in, say, French or Russian. In that specific case, if you were using a sequence-to-sequence neural machine translation model -- this is a couple [of] months old nowadays, we've got better models, but let's just say you are -- then you could be using the attention layer from your neural network in order to try and figure out what the relationships are between words in both sentences. And that's something that you can display to your user and say, 'Hey, here's what we're translating from English to French. So, if you see something wrong here, that's because the model didn't understand something to do with the translation.' So that would be what you would do in that specific circumstance. If you're doing visual recognition, the most you can really do in that case is something like gradient class activation maps, Grad-CAM, where you actually take a look at the image, and you take a look at the class and you go backwards. You take a look at the gradients with respect to the input, and based off of that, you try to determine what parts of the image were telling the neural net, 'Hey, these are the classes within the image.' So it really depends on the use case when it comes to interpretability.
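To give a flavor of the gradient-based attribution he's describing, here is a minimal, hypothetical sketch in Julia with Flux (which re-exports Zygote's gradient); the model, image size and class index are stand-ins. It computes a simple saliency map from the gradient of one class score with respect to the input pixels, a simplified relative of Grad-CAM, which works on convolutional feature maps rather than raw pixels.

```julia
using Flux

# Stand-in convolutional classifier and input image (28x28 RGB, batch of 1).
model = Chain(Conv((3, 3), 3 => 8, relu), Flux.flatten, Dense(26 * 26 * 8 => 10))
img   = rand(Float32, 28, 28, 3, 1)
class = 3                                   # the class we want to explain

# Gradient of that class's score with respect to the input pixels.
saliency = gradient(x -> model(x)[class, 1], img)[1]

# Collapse channels and batch to get a 28x28 attribution heat map.
heatmap = dropdims(sum(abs.(saliency); dims=3); dims=(3, 4))
```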
In terms of trusting models, I would say that, again, it boils back down to the use case. If you're doing something as important as diagnosing cancer, then obviously, there's still going to be a human in the loop that is looking at the model, is looking at its predictions, and potentially even looking at some high-level, for lack of a better word, reasoning -- so, for example, using Grad-CAM to see why the model came to a decision. That human in the loop is the one verifying that, to make sure there's no edge cases that the network is missing or that the network is making a mistake. But, at the same time, the network exists to enable that human to not have to do everything from scratch -- to only need to do verification, and to speed that process up, and in some cases increase accuracy by dealing with monotonous cases better than a human can. That's one thing. But, if you're talking about something as simple or harmless as keyword prediction -- so, for example, you're typing on your phone and the QuickType predictions at the top of the keyboard -- something as harmless as that doesn't really need as much verification or trust. Something like Gmail's Smart Compose doesn't need as much trust. So, I would say that boils down to the industry, the regulations around it, [and] how much of an impact it's going to have on people's lives. So, when it comes to really trusting machine learning models, if it's something that people's lives depend on, there should be a human in the loop as of now. Of course, there are some exceptions to this rule; I know that as well. I mean, you could take a look at things like self-driving cars; people's lives are in machine learning's hands at that point as well, but there's no humans in the loop. So that's sort of where we have to draw the line. Have we trained this machine learning model with enough edge cases or enough data for us to be able to trust that, in the majority of circumstances, it won't make a mistake? A mistake is inevitable, but can we trust that in the majority of circumstances -- at least more than a human -- it won't make a mistake? So, it really depends is what I would say.
Carty: To switch gears a little bit. You also wrote a book called Hello Swift!: iOS app programming for kids and other beginners. I saw a story in Business Insider just the other day that said more than 500,000 apps in the Apple App Store are at least partially written in Swift. What makes this language so intriguing for mobile app developers, and what makes it easy for young programmers to pick up, which basically sparked the book idea for you?
Bakshi: For the longest time, my two favorite languages have been Swift and Julia. The reason I say that is because they're very, very useful in their respective areas. Nowadays, Swift has started to become more useful outside of even just mobile app development. But what really sets Swift apart is hard to pin down -- this is very subjective, and in a way it's difficult to describe exactly what makes Swift so powerful -- apart from its compiler technology, its optimizations and the fact that it was built for mobile development. There's even its memory management model, ARC: it uses automatic reference counting rather than garbage collection, which is great for mobile. There's all sorts of things that it does there.
But then there's also just generally the, again, for lack of a better word here, the elegance of the language is something that does set it apart. The features that the language has in terms of complex support for generics, value and reference types, COW (copy on write), all these different things that the language supports, but yet being able to stitch them together in the way that Swift does, is something that not a lot of languages get right. So, developers love to program in Swift, and it's a great way for kids to get started because the syntax is just generally so simple.
It gives you a great idea of what it's like to program in ahead-of-time compiled languages, unlike Julia, which is just-in-time compiled, so it can be more like Python. [Swift] has to be more like C or Java. At the same time, it's simple, it's easy to use. You can write Swift code that is very, very easy to read, but still fast, [and] at the same time, if you wanted to, you could use all sorts of [APIs for C] and write really, really low-level code and manually manage memory. You can go from trusting the Swift compiler entirely, to essentially barely using it.
So, the flexibility and dynamic nature of the language and the fact that it can all coexist is really, really nice. Then, of course, there's the fact that it uses the LLVM compiler infrastructure. Now, the fact that it uses LLVM, and the fact that, in the future, we'll be using MLIR -- which is Multi-Level Intermediate Representation, a new project under LLVM -- means that things like Google's Swift for TensorFlow can actually leverage that in order to extract TensorFlow graphs directly out of Swift code. What that means is, we can write machine learning models very, very simply in Swift code as well, just like Julia. Now, Julia still has a few advantages in terms of native compiler extensions and things like that. So, Swift for TensorFlow doesn't solve the many-language problem, but it does solve the problem of needing hundreds of lines of code for an exotic architecture. So, the fact that it's so simple to use, the fact that you can go from zero to 100 -- in terms of control -- while still enabling all of that code to coexist, really makes it so that the language is something that developers love to use.
Black: I was curious how Swift, in particular, compares to hybrid app development, since we're talking about the creation of apps for iOS and Android. What are Swift's specific advantages compared to hybrid app development?
Bakshi: I feel like if we were to talk about Swift specifically, then it's not even about the language, right? It's important to distinguish between the language, the framework and the experience. Swift as a language isn't really an app development language. It's an open source, general-purpose programming language. Then, if we were to take a step back, you can also use Swift for mobile app development on iOS. Apple provides the SDKs for that: SwiftUI, UIKit, Cocoa, Cocoa Touch. All these things are what Apple provides to help you do app dev in Swift. But Swift itself is a general-purpose programming language. Now, I guess what your real question in that case would be is, what's the difference? Or what are the advantages of using Apple's native development frameworks, like SwiftUI or Cocoa Touch, versus using something that enables you to develop hybrid apps?
Now, I am personally a really, really big fan of developing native applications for different platforms. The reason I say that is because of integration into a platform. Integration into a platform really sets applications apart. So, if you take a look at generally why a lot of people even buy Apple products in the first place, it's because they work together really, really well. If you buy an iPhone, eventually, you'll be pressured into buying a Mac, and you'll be pressured into switching from Google Drive to iCloud, and you'll be pressured into switching from Spotify to Apple Music, and all this stuff, just because everything works together so incredibly well. Not even just that, even third-party applications integrate into the Apple ecosystem very, very well, because they feel like they're already a part of the phone. If you use an application that you download from the App Store that is natively built for iOS, you're using all those native iOS UI components, you're using all of that. That makes it feel like just another part of the Apple experience. The way that you can provide that experience is by using the frameworks that Apple gives you. If you're trying to generalize among platforms, so if you're trying to build one app for iOS and Android, suddenly you have to generalize: 'Hey, there's a UITableView [in iOS] and a list in Android. We're just going to call this a list.' You lose some of that platform-specific functionality that makes it feel like an Apple app or, on Android, would make it feel like an Android app. So the pain you feel is the context switch: I just left an Apple environment and entered this new environment, and now I'm going back to Apple. I don't want those context switches. I want to have a native experience across platforms, which is why using these platform-specific SDKs is sometimes really advantageous.
Carty: To end on a more future-leaning question. You'll likely be around to see the future of AI and machine learning and data science and application development. Heck, you might even have a hand in shaping that future. So, what do you see as the possibilities of these technologies in the future, and what do you personally hope to achieve in these fields?
Bakshi: First of all, I would say that when it comes to a technology like AI or machine learning, it's really very difficult to predict what the future looks like. I think that's because every day things are changing; there's a paradigm shift when you don't expect it. So, I mean, just a couple of years ago, we never would have thought neural networks would have been this good at generating data. But then suddenly, Ian Goodfellow invents the generative adversarial network one night, and things change, and there's exponential growth from there. Things change very, very quickly.
But what I will say is, in general, I see a few things. First of all, this isn't even a technical thing. I just feel like people's expectations from AI will finally start to die down a little bit and become a lot more realistic. Right now, we have hyper-inflated expectations of what AI is going to do, and how it's going to work, and how it's going to become as intelligent as humans and all that sort of stuff. As we see more and more people continue to make these predictions that then don't actually happen or follow through, we will start to lose trust in those predictions and actually start to have a more realistic view of what artificial intelligence means for humanity. So, that's one thing: the general public will have a better perception of AI in the coming future. I can't say when people's perspectives will start to shift, but I hope that within the next few years they do, because we need to make better and more informed decisions based off of those views. That's one thing.
The second thing I would say is that we will start to see a lot more of these major challenges -- that we even mentioned -- start to get solved, things like interpretability. How can we explain what a neural network does? How can we explain what a machine learning model does in terms of decisions? How can we inject our own knowledge into neural networks? How can we make it so they're not just reliant on data? How can we also enable them to work off of data or knowledge that we already have structured? IBM is already working on some technology. IBM Research has neural networks that can do symbolic reasoning, apart from just the data that they're trained on. So, they're working on that sort of stuff. I would also say that we're going to start to see people understand the reason for some of these problems, things like the bias problem. People need to understand that's not because artificial intelligence is saying, 'Hey, I don't like this kind of human,' but rather, [it's] because our data is fundamentally biased. It's not just human-specific qualities like race or gender or whatever. It's also just general [data] -- you could have an aircraft engine that a certain neural network is more biased to. It's just about the data and the mathematics behind it. So those are a few things. We're going to start to see less-inflated expectations for what AI is going to be doing. We're going to start to understand what the technology is about. We're going to start to see some of these problems solved. We're going to understand the reasoning behind some of these problems as well.
Switching to what I want to be doing in these fields, though -- there's a lot of stuff that I want to do. I love working with next-generation technologies, artificial intelligence and machine learning being the sort of main suite of technologies that I use. Mainly, I want to be applying these technologies in the fields of healthcare and education. So, really taking this tech into healthcare and education, fields where I believe it can make an impact, and enabling people within these fields to leverage the technology as much as possible. I have a [YouTube] series that I host at least once every month called TechTime with Tanmay. The first episode I actually hosted with Sam Lightstone and Rob High, who are both IBM Fellows, my mentors and [IBM department] CTOs. To quote Sam Lightstone, he said, 'Artificial intelligence technology won't replace humans, but humans that use artificial intelligence will be replacing humans that don't.' So, I really love that quote, and I really want to enable as many humans as possible to leverage this technology in the most accessible way. Apart from that, not even just applying this tech, but also enabling everybody to use it, taking whatever it is that I learn about it, sharing it through resources like my YouTube channel, the workshops that I put on, the books I write -- Tanmay Teaches Julia [and] Hello Swift! are some of them. Then, from there, [I'm] really just working toward my goal of reaching out to 100,000 aspiring coders. So far, I'm around 17,000 people there. But, yeah, that's sort of what I want to do: implementing this technology in fields like healthcare and education, enabling developers to make use of it and sort of sharing my knowledge at the same time.