The third installment of FDA’s new podcast series on technology and food safety focuses on artificial intelligence (AI) and its potential to advance food safety.
This quarterly podcast explores the potential for novel technological approaches and solutions in each of the core elements in the New Era of Smarter Food Safety Blueprint. The second Core Element, called Smarter Tools and Approaches for Prevention and Outbreak Response, includes goals to expand predictive analytics capabilities using AI and machine learning tools.
In this third podcast, Frank Yiannas, Deputy FDA Commissioner for Food Policy and Response, and Donald Prater, Associate Commissioner for Imported Food Safety, led a discussion with food industry experts on subjects that include the opportunities that AI offers to help protect consumers from food safety issues, potential uses of AI that food producers could consider, and what’s on the horizon for AI in FDA’s New Era of Smarter Food Safety.
- Maria Velissariou, Global Corporate Research & Development Vice President and Chief Science Officer for Mars Incorporated, a global, family-owned business with a portfolio of confectionery, food and pet-care products and services;
- Nikos Manouselis, founder and CEO of Agroknow, a food safety intelligence company that predicts food safety risks to inform preventive measures; and
- Cronan McNamara, founder and CEO of Creme Global, a company providing food safety data analytics and predictive modeling software and services.
- TechTalk Podcast Episode 1: Tech-enabled Traceability in the New Era of Smarter Food Safety
- TechTalk Podcast Episode 2: Whole Genome Sequencing in the New Era of Smarter Food Safety
TechTalk Podcast Episode 3 - Transcript
Welcome everyone to the FDA’s first TechTalk podcast of 2022! I’m Frank Yiannas, Deputy Commissioner for Food Policy and Response, and I’m here with Dr. Don Prater, Associate Commissioner for Imported Food Safety at FDA. We’ll be co-hosting today’s episode, which will focus on artificial intelligence (AI) and how it can contribute to the safety of the food we all eat.
I think any of you who has heard me speak publicly about AI and food safety during my tenure here at the agency will know that this is a topic that I’m pretty passionate about. Why? Well, I see the use of AI as an absolute game-changer, a powerful new tool that we can add to our food safety toolbox, one that could significantly enhance our ability to create a safer food system.
And AI is actually an important element in our goals that we’ve set forth under the New Era of Smarter Food Safety. Something that we’re doing to bend the curve of foodborne illness in this country, and around the world.
When we look at how other industries are harnessing the power of data to identify and predict trends, it is clear to me, as I suspect it will be to you, that the FDA and food system stakeholders should also be looking at how to tap into new technologies, such as artificial intelligence.
At FDA, we’re interested in strengthening our predictive analytics capabilities through expanded use of AI and machine learning tools, and to use AI to mine information from nontraditional sources, such as social media and apps, to detect outbreaks and supplement traditional health reporting.
In the next hour we’re going to explore, along with our guests, opportunities that AI offers to protect consumers from contaminated foods, potential uses of AI that food producers could start considering right now, not tomorrow, and what’s on the horizon for AI in both government and industry.
And so, with that, let’s get started. Don Prater, over to you!
Thanks Frank. Well, as Frank mentioned, I’m FDA Associate Commissioner for Imported Food Safety, and I help oversee the portion of the U.S. food supply that's imported from abroad, which accounts for about 15% of FDA-regulated food products and is growing.
However, for certain commodities, those percentages are really outsized. In 2019, those shipments included 32% of our fresh vegetables, 55% of our fresh fruit, and more than 90% of the seafood Americans love to consume. Later in this podcast, I'll talk more about some areas that FDA is exploring with respect to import screening using AI and machine learning.
And now I have the privilege to introduce our distinguished panel of experts who will share their unique experience and insights on AI and food safety. Our guests today are:
- Maria Velissariou, the Global Corporate Research & Development Vice President and Chief Science Officer for Mars
- Nikos Manouselis, founder and CEO of Agroknow, a food safety intelligence company that extracts tailor-made data insights for the global supply chain; and
- Cronan McNamara, founder and CEO of Creme Global, a company providing food safety data analytics and predictive modeling software and services.
Thank you all very much for joining us.
Let me begin by asking each of you to share with us ways in which industry is pioneering the use of AI to protect consumers from unsafe food. And would you please also provide a specific example in which you’ve seen AI make a real difference? Maria, would you like to start us off?
Thank you, Frank and Don. I'm honored to be here today, and it is a pleasure to be with Cronan and Nikos too. AI is becoming increasingly embedded in the end-to-end supply chains in agriculture and food.
It gives us algorithms that, when combined with conventional techniques like forecasting, can sharpen and expedite foresights and insights. And AI-powered Internet of Things (IoT) devices can improve efficiencies, detect defective or unsafe ingredients in food processing, and ensure that food safety protocols are adhered to in compliance with regulations.
The technology has a lot of promise for food safety, but we need to ensure that we have high quality, accurate and secure data and that we have addressed factors like human biases. Leveraging AI allows companies like Mars to make food safer, available to more people and in a more sustainable way by helping reduce the environmental footprint and waste.
One example I would like to highlight is AI-based horizon scanning that generates foresights and insights for food safety and risk mitigation. The value of horizon scanning comes from the data-generated insight, the in-depth risk analysis, and the action that it helps to drive.
And also, horizon scanning can provide more transparency and holistic perspective on recalls and withdrawals and, in this way, facilitate learning and improvement within an organization and across industry.
I would also like to highlight another example, which is aflatoxin prediction. Currently, we have an aflatoxin predictive model at Mars, which we have deployed internally within supply quality assurance to guide sourcing options. We are in the process of building AI-based algorithms to enable us to see patterns and move into predictive analytics.
The model is based on high-resolution meteorological, geospatial, and temporal data, and it predicts how aflatoxin develops in the field and during transportation and storage. The ultimate goal is to provide farmers with preventative tools to mitigate aflatoxin formation in the first place.
And to this end, we are exploring country-based pilots through our food safety coalition partners, which involve academia and nonprofit organizations. Back to you, Don.
Thank you, Maria. Those are great examples. Really appreciate hearing about how Mars is using horizon scanning and, in particular, the aflatoxin prediction model. Nikos, what has your experience been?
Don, Frank, thank you so much for the kind invitation. It's a pleasure and an honor to join this panel. My example also comes from the area of horizon scanning, risk horizon scanning, but I want to dive deeper into the use case of food safety incidents that are being announced all over the world.
Not the ones announced by the major agencies or authorities like the FDA or the European RASFF (Rapid Alert System for Food and Feed). I mean the ones reported by small authorities, such as announcements from local municipalities, that are often published on some obscure website, in one of the national languages or dialects, and that come in very unstructured formats. So how can we use AI to discover and incorporate data from this type of information source into horizon scanning software?
This is where we use Web crawling software systems that employ AI in different steps of such a process.
They can identify using advanced technologies, such as natural language processing technologies, what exactly is being described in an announcement. They can translate an announcement that comes in a different language into English. They can use other AI techniques to automatically annotate the specific type of product influenced or the official name of the company that has been involved, or the hazard that the announcement is referring to.
So, they create very structured data records. They can also combine announcements that come from different sources on the same incident to create a richer, enhanced description of something that is important for someone to know.
By employing these techniques, we get better information and wider coverage of data sources in near-real time, compared to the hours or sometimes days that a human would require to perform such a task.
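The pipeline Nikos outlines, discover, translate, annotate, and combine announcements into structured records, can be sketched in miniature. This is an illustrative sketch only: the keyword lists stand in for trained language-processing models, and the source names and announcement texts are invented.

```python
# Illustrative sketch of structuring and merging food safety announcements.
# Real systems would use trained NLP models for entity recognition and
# translation; here, simple keyword matching stands in for both.

PRODUCTS = {"milk", "beef", "spinach"}
HAZARDS = {"salmonella", "listeria", "aflatoxin"}

def annotate(text: str) -> dict:
    """Turn an unstructured announcement into a structured record."""
    words = {w.strip(".,:").lower() for w in text.split()}
    return {
        "product": next((w for w in words if w in PRODUCTS), None),
        "hazard": next((w for w in words if w in HAZARDS), None),
    }

def merge_incidents(records: list) -> list:
    """Combine records from different sources describing the same
    (product, hazard) incident into one enriched record."""
    merged = {}
    for rec in records:
        key = (rec["product"], rec["hazard"])
        entry = merged.setdefault(
            key, {"product": rec["product"], "hazard": rec["hazard"], "sources": []}
        )
        entry["sources"].append(rec["source"])
    return list(merged.values())

announcements = [
    {"source": "municipality-a", **annotate("Salmonella found in spinach batch.")},
    {"source": "national-b", **annotate("Recall: spinach contaminated with Salmonella.")},
]
incidents = merge_incidents(announcements)
print(len(incidents))           # two announcements collapse into one incident
print(incidents[0]["sources"])  # both sources are retained
```

The key point of the sketch is the last step: two noisy announcements from different sources become one structured incident record with provenance, which is what makes near-real-time horizon scanning feasible.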
And I can already think of a couple of examples of clients using AI in operational settings. Like a very large beverage manufacturer that is using this real-time intelligence to inform their quality council, the experts meeting of the organization that needs this external risk intelligence on a periodic basis so that it can take decisions. Or a food service chain that incorporates this information in the monthly risk newsletter that is circulated to all the food safety professionals in its ecosystem.
Thanks, that's a great example, and I really appreciate hearing more about how you're examining food safety incidents, not only those reported at the national level but also those reported at the local level. We're also interested in natural language processing, one of the disciplines of AI, and in the power of AI to look at structured and unstructured data. So, thank you very much for sharing that. Cronan, over to you. What are your thoughts?
Thanks, Don. Thanks, Frank, for the invitation to be here. We have deep experience in this field as well: Creme Global has been working with industry and government for many years to help gather, anonymize, and structure data so that it can be shared and visualized. And in recent years, we've been aggregating these industry data sets in order to train machine learning and AI models on the data so that we can uncover hidden patterns and make predictions. I have two examples to share with you today.
My first example is the Western Growers Food Safety Data Sharing Project. This is a project you're well familiar with, Frank, as the FDA provided a letter of support for the project, stating that it is in line with your vision for the future of food safety.
It's a very exciting project that the Western Growers group is organizing to collect data, and we started with a pilot program of leafy green growers in the California and Arizona regions. Now that we've aggregated a number of growers’ data, we've created dashboards where these growers can come in and visualize what's going on with their data at an aggregated and anonymized level. The types of data we collected include inspections, product testing, water testing, and location, and these are now being combined with information on weather, topography, dates, and seasons. We're training a machine learning tool to uncover risks and trends that may not be apparent to the human eye, even when you're visualizing these datasets.
We're able to predict risks that are emerging or increasing and allow the growers to understand and benchmark their operations against their peers and their colleagues in the region and understand emerging risks in that region.
The second example is a project called The Sequence Alliance for Food Environments, or SAFE, project here in Ireland, which was a collaboration between ourselves at Creme Global, UCD (University College Dublin), and six leading food companies. This was more focused on the factory environment. The goal of the project was to develop a predictive software toolbox to enhance food safety and quality using environmental information and, in particular, next generation sequencing. We used 16S rRNA methods to swab and gather data at the genus level. Because that data is quite complex, we combined it with other information, like IoT sensor readings and traditional microbiological culture results, and again used machine learning to really understand the microbiome of those environments and to predict when that microbiome may be changing and when more dangerous pathogens or bacteria may be evolving in it.
In short, the project helped the industry to understand their manufacturing environments in more detail and the machine learning and data management, data analytics methods help them to really understand the potential and emerging risks in those environments and undertake mitigating actions in good time. So those are two key examples I'd like to share today.
Thanks, Cronan, Maria, and Nikos. Those are really great and compelling examples, really energizing. We’ve also received some questions from stakeholders about how the FDA will be using AI. Before we move on, Don, would you please tell our listeners about the FDA’s AI seafood pilot?
Sure. Thanks, Frank. The FDA has been conducting a pilot that leverages AI, specifically machine learning, to strengthen our ability to predict which shipments of imported foods pose the greatest risk of violation.
In August 2020, FDA shared some initial findings from a proof of concept for imported seafood, in which we trained a machine learning model on two years of data associated with seafood imports and then tested it against the subsequent year's data.
The proof of concept suggested that machine learning could greatly increase the likelihood of identifying a shipment containing potentially contaminated products.
We started with seafood because, as I mentioned, we import a significant proportion of our seafood. In fact, more than a million entry lines of seafood are offered for import each year, which translates into thousands of entry lines each day for which FDA must make admissibility decisions, including which shipments to examine, sample, and test.
From February through July 2021, FDA conducted the second phase of the AI imported seafood pilot in the field. We wanted to further examine the deployment of an AI/machine learning model and, importantly, its integration into FDA systems and operations. We are currently reviewing these findings and may have more to say about that at a later stage.
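The evaluation Don describes, training a model on two years of entry data and testing it against the following year, is a temporal train/test split. Here is a minimal sketch with synthetic data; the firm names, violation flags, the 0.5 threshold, and the simple per-firm risk score are all invented for illustration, and a real screening model would use far richer features.

```python
# Sketch of a temporal train/test evaluation for import screening.
# A naive "model" scores each shipper by its historical violation rate;
# all data here is synthetic.

from collections import defaultdict

# (year, firm, violative?) -- synthetic entry-line history
history = [
    (2018, "firm_a", True), (2018, "firm_a", False), (2018, "firm_b", False),
    (2019, "firm_a", True), (2019, "firm_b", False), (2019, "firm_b", False),
    (2020, "firm_a", True), (2020, "firm_b", False),  # held-out test year
]

train = [r for r in history if r[0] < 2020]   # train on the prior years
test = [r for r in history if r[0] == 2020]   # evaluate on the next year

# "Model": per-firm violation rate observed in the training window
counts = defaultdict(lambda: [0, 0])          # firm -> [violations, total]
for _, firm, violative in train:
    counts[firm][0] += int(violative)
    counts[firm][1] += 1
risk = {f: v / n for f, (v, n) in counts.items()}

# Screen the test year: flag shipments whose firm risk exceeds a threshold
flagged = [(firm, violative) for _, firm, violative in test if risk[firm] > 0.5]
hits = sum(violative for _, violative in flagged)
print(f"flagged {len(flagged)} shipments, {hits} truly violative")
```

Testing on a year the model never saw, rather than on a random subset, is what makes this kind of evaluation honest about how the model would perform on future shipments.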
But more broadly, FDA is looking to apply lessons learned from the seafood pilot to enhance our predictive capability for other types of food and regulated products. So, stay tuned to this space. Back over to you, Frank.
Thanks Don. I’m very proud of the work that FDA is doing with AI as part of FDA’s New Era of Smarter Food Safety. What we’ve just heard really underscores the promise of AI and how it can truly help strengthen food safety. Now, some of you listening represent a variety of stakeholders, from industry to food production facilities, and you might be wondering, as am I: that all sounds great, but how can I use AI for what I'm doing in my workplace?
And so, let's explore that for a bit. Nikos, we understand that AI is wonderful, but it's about more than just the technology. Our listeners aren't chasing the newest trend or shiny object; they want to know how to use AI to solve some of their biggest public health challenges.
So, what type of use cases do you think AI can help them address and once they've identified the use case, do you have any specific recommendations for listeners on how to get started?
In my view, every public health use case has to do with emerging risk prevention. At least rapid mitigation, but ideally, also prevention. So how can we put in place services powered by AI, facilitated by AI, that will help us respond quickly or take preventive measures before something hits consumers and the public?
My favorite analogy there is weather forecasting, and the way a combination of public sector and private sector services and infrastructure components are put in place and work together to help us foresee, forecast, and address natural disasters.
So, how could we put in place a similar array of infrastructural layers so that we can do something as important as food risk prevention?
I would invite our listeners, whether they work in the public sector or the private sector, to start decomposing this problem, using that analogy, into the layers of infrastructure that we need. Where does the data reside? How can we collect and bring together, in trusted repositories, databases, or larger data storage facilities, all of the information that is important to us, so that we can feed predictive models?
And then how can we develop the right models for each purpose? How can we combine scientific, public, and private resources and intellectual power, so that we can develop those predictive models that will help us calculate the likelihood of an event that is coming up in the supply chain?
At the end of the day, how do all these pieces come together so that we can power an ecosystem of services? Both publicly and privately developed and managed services, so that we can use them in given problem settings, in specific decision scenarios, to address and try to help answer specific questions.
I am not thinking about one particular solution to a given problem. I am thinking about use cases that are solving grand public health challenges by decomposing them to different components that have to come together.
Thank you, Nikos, that's wonderful. We often say food safety requires collaboration, and you're saying we might be able to collaborate on the use of this new and powerful technology. Maria, you and I have talked about AI in the past, and I was struck in our conversations by some points you emphasized: that to be successful in an AI project, it takes more than just the technology; it also involves a human element. You persuaded me that this is something we should talk about. For people listening who want to get started, who else in the organization should they be including to consider the human and behavioral aspects of a transition to AI?
Thank you for the question, Frank. Certainly, this is an interdisciplinary area that requires a lot of collaboration. At Mars, we believe that data and AI and different functions coming together are absolutely vital components for tackling supply chain challenges, both tactical and strategic. And we want to empower our associates to move 100 times faster today, which we cannot do without the power of data and the power of analytics.
And artificial intelligence generates a lot of value, and adopting an approach that maximizes this value requires partnership between several functions, like R&D (research and development), digital technologies, manufacturing, procurement, and sales, among others. As Nikos and Cronan highlighted earlier, there is a plethora of data out there, and it is important to separate signal from noise and establish the right actions in a timely fashion. Today, we have incorporated artificial intelligence in more than 250 projects and capabilities across Mars, ranging from health care and sustainability to medical systems, to name but a few. It is indeed a key enabler to unlock great speed and volume.

We also need a willingness to experiment quickly: stop what doesn’t work and scale fast what does. And we need to push through the hype to understand what problem we’re solving. Not every problem requires an AI solution, and that is why it is critical to employ a user-centric approach in the process.
There may be also a couple of other areas that we need to consider, and upskilling is one of them. Our associates with deep expertise in traditional sciences like microbiology and toxicology may need to become fluent in the basics of AI so that they can understand how AI generates value. They don't need to become AI engineers. Equally, AI engineers need to understand the basics of food science and technology so that they can collaborate with their counterparts. And importantly food safety is a great example of where AI can assist human judgment but cannot replace it. Nor is AI a substitute for good behaviors and practices, which are cornerstones of culture. These need to be practiced and modeled every day and consistently.
That's excellent, thank you, Maria. All right, Cronan, moving over to you. You and I have talked about data for many years now. You've given me great advice, and based on our conversations I've coined this phrase: AI sometimes seems magical, but it doesn't magically happen. You've persuaded me that the quality of data is foundational and critical. So, I've heard it before, but I was hoping you could share with our audience some tips on why this issue of data quality is so important.
For sure, Frank, and that's so true. And you asked me to keep it concrete, so I'm going to try and give you some good concrete tips on data quality and how companies and organizations can manage that.
Of course, first of all, it's important to carry out really comprehensive checks on your data.
We've all heard the saying garbage in, garbage out. We don't want bad data to lead our models astray. And this can be especially risky if there are many steps in the data collection process. So how do you do that?
Well, let's start with trying to have a consistent format for your data sources. And this helps to match up data in the supply chain and also helps error tracking.
And you should review outlier data and question it. Is it an error, a data entry error, or is it a valid data point that’s an outlier in your data? Because it's an important distinction. Review all of your missing data and null data values. What do these really mean? Is it a negative result, or is it just missing data where no test was performed, for example?
And data engineering has become an engineering discipline in its own right, where you have to build quality assurance checks into the process.
A good way to check your input data is visualization. Visualizing your input data can be as important as visualizing the results, because when you visualize the data, you start to see patterns and can quickly check whether the data makes sense.
You have to make sure you really understand the meaning of each data point and what assumptions are implicit in the data. So, an idea to help with that, you know, read the documentation on the data or even talk to the person or the team who collected the data.
All of the metadata that comes with the data is also so valuable in the model. So, make sure you understand those variables as well.
The issue as well, even though we have a lot of data these days, is bias in the data. Instead of garbage in, garbage out, I like to think of bias in, bias out. If the input data is biased, the results of your model can be biased. Often the data you'll be using in your model was collected for a different purpose; it could have been collected for a marketing purpose or a monitoring program, and you’re trying to use it for food safety.
When checking for bias: does the data cover all scenarios? Does it cover all seasons, or was it collected only during the summer, for example? These are important things to check.
Finally, when the events you're interested in are rare, you need a large amount of data. Quantity is also important, as well as quality, so that you have enough samples of those rare events.
Those are my top tips for data quality in trying to build machine learning and AI models.
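Several of Cronan's tips, outlier review, missing-value accounting, and coverage checks for bias, can be expressed as simple automated checks in a data pipeline. A minimal sketch follows; the field names, sample values, and the 5x MAD threshold are invented for illustration.

```python
# Simple data-quality checks of the kind described above: outliers,
# missing values, and seasonal coverage (a proxy for sampling bias).

from statistics import median

samples = [
    {"cfu": 12.0, "season": "summer"},
    {"cfu": 14.0, "season": "summer"},
    {"cfu": 11.0, "season": "winter"},
    {"cfu": 95.0, "season": "summer"},
    {"cfu": None, "season": "spring"},  # null: negative result, or no test?
]

values = [s["cfu"] for s in samples if s["cfu"] is not None]

# Median absolute deviation is a robust outlier test: unlike mean +/- 2*sigma,
# a single extreme value cannot inflate the spread and hide itself.
med = median(values)
mad = median(abs(v - med) for v in values)
outliers = [v for v in values if abs(v - med) > 5 * mad]

# Surface missing values for review rather than silently dropping them.
missing = sum(1 for s in samples if s["cfu"] is None)

# Coverage check: was data collected in all seasons, or only some?
seasons = {s["season"] for s in samples}
covered = {"spring", "summer", "autumn", "winter"} <= seasons

print("outliers to review:", outliers)
print("missing values:", missing)
print("all seasons covered:", covered)
```

Note that the outlier is flagged for human review, not deleted: as Cronan says, it may be a data-entry error or a genuine, important extreme, and that distinction needs a person who understands the data.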
So now, let’s focus on the future of AI. Maria, what is on the horizon? How is this field transforming?
Thank you, Don. At Mars we’re guided by our five principles, with quality being the first, and we want to use AI to steer our associates away from low-value, repetitive tasks so that they can focus on high-value, creative work and, in this way, move into deeper analysis and judgment. This means we can use AI to organize huge amounts of data coming from raw materials and manufacturing processes, so our food safety experts can apply their domain-specific expertise to interpret the data. It's also important to identify bias, understand what it is, and ferret it out early in the process.
I would like to use three examples to bring this to life. The first example is about digital quality and food safety. We’re leveraging the power of data to ensure high quality and safe food production through a platform called Mission Control.
The platform analyzes critical performance metrics, performs comparative analysis, and produces new insights in a way that was not previously possible. We are focusing on four categories of exploration: raw materials, like the aflatoxin predictive models I talked about earlier; quality analytics; finished product; and horizon scanning and external listening.
And we're currently driving end-user adoption and working on machine learning for the next generation of capabilities, like risk profile of raw materials. The second example is coming from the area of traceability. We have deployed systems and processes, several of them to enable traceability of raw materials and finished products.
However, we're still relying heavily on human resources to execute the process against standard operating procedures. And we know that the vast majority of events are caused by noncompliance to procedures. By using process mining, we can see how the process is practiced on the shop floor and therefore put in place corrective actions to increase compliance and in this way increase business performance.
My third example is about culture. We want to lead by example as we drive our digital transformation, and we asked the senior leaders to sponsor the change coming from digitalization like predictive and prescriptive analytics.
And we have decided to implement reverse mentoring to give the senior leaders the opportunity to be coached by internal digital experts. And this is an effective way to help them play the role of sponsors in this digital transformation journey.
The second aspect of culture is an inaugural Mars Artificial Intelligence Festival that we ran in 2020. It was a week-long all virtual immersive experience to educate, demystify and celebrate AI among all Mars associates. It was actually a great opportunity to showcase how prevalent AI already is within the organization.
And there's also a lot of learning from each other. And I want to quote here two examples from two very different parts of the organization. One is from our Mars veterinary hospital. In the anatomic pathology division, we have created AI use cases for tumor detection. And now we can read X-rays in a matter of minutes as opposed to days.
The second example comes from the other side of the business, product manufacturing, where we use AI to measure the color blend of Skittles and ensure consistent quality. Once again, this allows our associates to focus on more strategic work.
Thanks, Maria. I think those are really terrific examples, very specific, and I know they'll be helpful to our listeners. Nikos, can you provide some insights into the supply chain? How is supply chain visibility, in times of pandemic and severe weather, being improved by new technologies such as AI and machine learning?
These types of insights need to be more systemic and holistic in their view. This means, how can we look at the supply chain as a system that is affected and influenced by different factors that then lead to an emerging risk?
I can share two examples there. One is the example of a major beef producer in the UK that was severely hit by the horsemeat scandal. They took the decision to finance scientific research to help them better understand and map the different paths in the supply chain associated with red meat, and then use this to identify the supply chain signals that were associated with fraud and might serve as predictors of fraud, as far as beef is concerned.

When they did this exercise, they came to us and said, “OK, how can we now use these candidate signals as input to an AI model that will predict beef fraud? And how can this model power a beef fraud prediction dashboard that operates in real time?”

We did quite an elaborate model development exercise together, starting with the obvious signals, like the prices of beef in the different markets from which they are supplied, and then looking into other, more systemic signals, like, “Did we have any political events that had an effect on prices? Do we have any other type of indicator, like country corruption risk, that we could associate with the prices of beef?” Through this exercise, we developed a model that was proven able to predict future events in a way that was very specific to their needs.
I have another example in mind. This is a model that has been developed by the Food Safety Research Institute of Wageningen University in the Netherlands, trying to look three to five years down the road and see if the levels of residues that we will see in food can be forecasted by using indicators like the level of consumption of chemical products in a given geography. They did this modeling exercise with lots of data from the Netherlands, and they developed a model that is taking as input different country-wide parameters, like how many products were sold as inputs for agricultural production in the country.
This served as input to help forecast the level of residues in food that they should expect a few years down the road.
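The kind of lagged forecasting model Nikos describes, predicting residue levels years ahead from input-use indicators, can be illustrated with a one-variable least-squares fit. All numbers below are invented; the actual Wageningen model draws on many country-wide parameters.

```python
# Sketch: forecasting residue levels from a lagged input-use indicator.
# A one-variable least-squares fit on invented numbers.

# (chemical inputs sold in year t, residue level observed in year t+3)
data = [(100, 2.0), (120, 2.5), (90, 1.8), (150, 3.1)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

# Ordinary least squares for y = slope * x + intercept
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Forecast the residue level expected three years after a year in which
# 130 units of inputs were sold
forecast = slope * 130 + intercept
print(round(forecast, 2))
```

The structure, not the arithmetic, is the point: today's measurable inputs become predictors of a food safety outcome several years out, which is exactly the horizon the residue model targets.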
Thanks, Nikos. Those are some really interesting examples. Cronan, can you tell us about small and medium-size companies? Can they use AI? What is required in terms of technology and expertise? Does open-source data make this a more feasible option?
Sure, Don. And absolutely, yes. Small and medium-size companies are already starting to benefit from AI, even if they're not necessarily building it in-house or don't currently have the expertise or capacity in-house.
There are a number of industry-wide projects that have taken place that are developing AI models that members of an industry group can then tap into and benefit from. So, these are kind of semipublic and industrial systems that they can access.
And of course, there are companies like Agroknow and ourselves, Creme Global, that specialize in food safety and have the expertise and infrastructure to help companies get started on the mission to start using their own data with some models that have been developed for industry or more widely developed models.
And they can, of course, then start working towards developing their own in-house team, like the very well-established machine learning and AI team Maria has. But if you want to start building up a team, I think some of the points made earlier are really important.
You know, you would need to assign a fairly good scientist to be responsible for this team who probably understands the food safety and the microbiology or toxicology of the situation and then start to recruit and train up the following roles:
I'd start with data engineering: someone who can begin to organize all of the data in the organization.
Next, you would want somebody with really good grounding in mathematics or physics who can take on the AI piece and you would need some software visualization expertise and then some computing system experts to manage all of this.
Could you find all of that in one person? It's possible, but very hard. It's very rare to find. So probably you're talking about hiring two or three people there. And so, it's not a trivial undertaking. And would take some investment.
The open-source data is a really interesting way to start, and there's very valuable open-source data being provided in the U.S. by organizations such as the CDC, yourselves (the FDA) and the USDA. And what's great about the open-source data is it provides a starting point and direct access to well-structured data that can kick start the data repository for these companies. And then they can start trying to curate and aggregate their own data and combine it with this open-source data in order to provide an even richer data source that's more relevant to their company’s operations.
Open-source data is very powerful when it's overlaid and combined with industry data or company data. So, these industry groups that get together and start aggregating and anonymizing data and combining it with open-source data are, I think, using all of the data resources they can access to create predictive models that can benefit the whole sector and the companies within it.
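As a sketch of what "overlaying" open-source data with a company's own data can look like, here is a minimal Python example. The field names, counts, and ingredients are all hypothetical and do not reflect any agency's actual schema; the point is simply the outer join on a shared key so a model can see both public and private signals side by side.

```python
# Minimal sketch (hypothetical records): enrich a company's internal incident
# data with an open-source recall feed, joined on (ingredient, year).

open_source_recalls = [
    {"ingredient": "leafy greens", "year": 2021, "public_recalls": 14},
    {"ingredient": "peanut",       "year": 2021, "public_recalls": 6},
]

company_incidents = [
    {"ingredient": "leafy greens", "year": 2021, "internal_alerts": 3},
    {"ingredient": "dairy",        "year": 2021, "internal_alerts": 1},
]

def combine(public_rows, private_rows):
    """Outer-join the two sources on (ingredient, year), defaulting to zero."""
    merged = {}
    for row in public_rows:
        key = (row["ingredient"], row["year"])
        merged.setdefault(key, {"public_recalls": 0, "internal_alerts": 0})
        merged[key]["public_recalls"] = row["public_recalls"]
    for row in private_rows:
        key = (row["ingredient"], row["year"])
        merged.setdefault(key, {"public_recalls": 0, "internal_alerts": 0})
        merged[key]["internal_alerts"] = row["internal_alerts"]
    return merged

table = combine(open_source_recalls, company_incidents)
print(table[("leafy greens", 2021)])  # a key where both signals are present
```

In practice an industry group would aggregate and anonymize many companies' records before a join like this, but the resulting table is the kind of richer, combined feature set a predictive model can be trained on.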
Thanks, Cronan. That's really fantastic. It's good to hear that there are opportunities for small and medium-size companies to get engaged and utilize artificial intelligence in this way. So, I'd like to thank all of our panelists for today. I really appreciate you sharing these great examples. I think it'll be super helpful to our listeners. I'd like to turn back to Frank for his thoughts on today's TechTalk. Frank?
Thanks so much Don, Maria, Nikos, and Cronan. I think it’s clear from today’s podcast that I’m not the only one who is passionate about AI and its potential to tackle food safety challenges.
All I can say is, while there was so much discussed today, I'm super energized by the conversation. But I thought I'd close by sharing with the audience some of my key takeaways, and I distilled it down to five. Literally. I've been sitting here taking notes as you all were speaking.
Number One: Better food safety begins and ends with better data. That's what I heard. I know we've used great tools in the past: inspectional approaches, the tool of training, the tool of testing. But we're entering the 21st century, where we increasingly have the ability to convert large volumes of data into powerful predictive information through tools such as AI. And Cronan, you emphasized the quality of data. So better food safety in the 21st century is going to begin with better data.
The second key takeaway for me was that while AI is a technology, you persuaded me that it's not about the technology; it's about the public health problem we're trying to solve. And I think you all did a wonderful job of giving some concrete examples of specific use cases. Maria and Nikos, you talked about horizon scanning: not only managing what we think is coming down the pike, but maybe managing things we're not seeing around the corner, using this powerful tool of AI. Don, you gave a very good example of a use case involving large volumes of seafood: 94% of all seafood consumed in the United States is imported, and we can use this tool to help ensure that seafood is safe for American consumers. Then Cronan, you gave a use case of leafy greens, a recurring vehicle of foodborne disease. That's really, really encouraging to me.
The third takeaway: Nikos, I think you persuaded me that AI can be a team sport. I'll be candid with you, when I got into this, I was thinking, well, we're going to hear about how AI can be used in my company or my specific supply chain. And what we've heard is that maybe we should pause and think about it more broadly.
We often say food safety is a collaborative effort. It's a shared food system. But can we democratize big data in a way that the public and private sectors share information, or private sector to private sector? Together we have more data, and we all win on the food safety issue together. We've said food safety is not a competitive issue. So, number three, AI can be a team sport, and I would challenge listeners to think about it a little unconventionally, out of the box.
My fourth takeaway was that you need to focus on more than just technology. Maria, I appreciate you emphasizing the human element, because the reality is that AI isn't going to replace the subject matter experts who are listening here today. We're always going to need the best and brightest food safety professionals; AI can be a powerful adjunct. And if you're going to leverage it in your place of employment, it's probably going to mean you change the way you work, so engage others in your organization, not just the IT department.
And then lastly, Number Five: It's already happening now, so let's get started. I think we heard some powerful use cases of how AI is already being used in food, and we all know it's at work elsewhere: if you have a smart device, AI is in it; if you use online shopping platforms, AI is used there; if you've been to a healthcare provider, AI is at work in healthcare. And it's happening in food too. So let's get started now, using AI to strengthen food safety.
Listen, I thought today's session was fabulous. I don't know about you. I want to thank all of our listeners for giving us your time. I hope we answered your questions, but rest assured there will be more conversations about AI and food safety in our discipline and in our future.
I also hope this podcast gave you some real practical ideas on how you can add AI as a powerful tool to your food safety toolbox. If you've enjoyed today's podcast, please take a moment to visit our TechTalk podcast page on fda.gov for updates on the next episode.
Thank you all for listening! Until next time, stay safe.