Join LF AI & Data at Kubernetes AI Day!


The LF AI & Data Foundation is pleased to be a co-host at the upcoming Kubernetes AI Day! The event will be held virtually on May 4, 2021, and registration is only US$20.

Kubernetes is becoming a common substrate for AI, allowing workloads to run in the cloud or in an organization's own data center and to scale easily. This event is great for developers interested in deploying AI at scale using Kubernetes.

The agenda is now live! Please note the times below are displayed in Pacific Daylight Time (PDT).

Tuesday, May 4, 2021

  • 1:00 PDT – Opening Remarks
  • 1:05 PDT – Scaling ML pipelines with KALE — the Kubeflow Automated Pipeline Engine
  • 1:40 PDT – A K8s Based Reference Architecture for Streaming Inference in the Wild
  • 2:15 PDT – Embrace DevOps Practices to ML Pipeline Development
  • 2:45 PDT – Break
  • 3:05 PDT – Taming the Beast: Managing the day 2 operational complexity of Kubeflow
  • 3:40 PDT – The SAME Project: A Cloud Native Approach to Reproducible Machine Learning
  • 4:10 PDT – Break
  • 4:25 PDT – Stand up for ethical AI! How to detect and mitigate AI bias using Kubeflow
  • 5:00 PDT – The production daily life: An end to end experience of serverless machine learning, MLOps and models explainability
  • 6:30 PDT – Closing Remarks

Visit the event website for more information about the schedule and speakers. Join us by registering to attend Kubernetes AI Day – Register Now!

The LF AI & Data Foundation’s mission is to build and support an open AI community, and drive open source innovation in the AI, ML, and DL domains by enabling collaboration and the creation of new opportunities for all the members of the community. 

Want to get involved with the LF AI & Data Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected to the latest updates.

LF AI & Data Resources

Sparklyr 1.6 Release Now Available!


Sparklyr, an LF AI & Data Foundation Incubation Project, has released version 1.6! Sparklyr is an R Language package that lets you analyze data in Apache Spark, the well-known engine for big data processing, while using familiar tools in R. The R Language is widely used by data scientists and statisticians around the world and is known for its advanced features in statistical computing and graphics. 

In version 1.6, sparklyr adds a variety of improvements. Highlights include:

  • Sparklyr now has an R interface for Power Iteration Clustering (see the PySpark sketch after this list)
    • Power Iteration Clustering is a scalable and efficient graph clustering algorithm. It finds a low-dimensional embedding of a dataset using truncated power iterations on a normalized pairwise similarity matrix of all data points, then runs the k-means algorithm on the embedded representation.
  • Support for approximate weighted quantiles was added to `sdf_quantile()` and `ft_quantile_discretizer()`
    • Sparklyr 1.6 features a generalized version of the Greenwald-Khanna algorithm that takes the weights of sample data into account when approximating quantiles of a large number of data points.
    • Similar to its unweighted counterpart, the weighted version of the Greenwald-Khanna algorithm can be executed in a distributed manner on multiple Spark worker nodes, with each worker node summarizing some partition(s) of a Spark dataframe in parallel, and the quantile summaries of all partitions can be merged efficiently. The merged result can then be used to approximate weighted quantiles of the dataset, with a fixed upper bound on the relative error of all approximations.
  • `spark_write_rds()` was implemented to support exporting all partitions of a Spark dataframe in parallel into RDS (version 2) files. This functionality was designed and built to avoid high memory pressure on the Spark driver node when collecting large Spark dataframes.
    • RDS files will be written to the default file system of the Spark instance (i.e., the local file system if the Spark instance is running locally, or a distributed file system such as HDFS if the Spark instance is deployed over a cluster of machines).
    • The resulting RDS files, once downloaded onto the local file system, should be deserialized into R dataframes using `collect_from_rds()` (which calls `readRDS()` internally and also performs some important post-processing steps to support timestamp columns, date columns, and struct columns properly in R).
  • Dplyr-related improvements:
    • Dplyr verbs such as `select`, `mutate`, and `summarize` can now work with a set of Spark dataframe columns specified by `where()` predicates (e.g., `sdf %>% select(where(is.numeric))` and `sdf %>% summarize(across(starts_with("Petal"), mean))`)
    • Sparklyr 1.6 implemented support for `if_all()` and `if_any()` for Spark dataframes
    • Dbplyr integration in sparklyr has been revised substantially to be compatible with both dbplyr edition 1 and edition 2 APIs
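Because sparklyr's new clustering interface wraps the Power Iteration Clustering implementation in Spark MLlib, the algorithm's inputs and outputs are easy to see from the equivalent PySpark call. The sketch below is illustrative only, not sparklyr's R API: the tiny similarity graph and all parameter values are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.ml.clustering import PowerIterationClustering

spark = SparkSession.builder.master("local[2]").getOrCreate()

# A pairwise similarity graph as (src, dst, weight) rows; values are made up.
# Points {0, 1, 2} are strongly connected, and {3, 4} form a second cluster.
edges = spark.createDataFrame(
    [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (3, 4, 1.0), (2, 3, 0.01)],
    ["src", "dst", "weight"],
)

# Truncated power iterations on the normalized similarity matrix, followed
# by k-means on the resulting low-dimensional embedding.
pic = PowerIterationClustering(k=2, maxIter=20, weightCol="weight")
pic.assignClusters(edges).show()  # one (id, cluster) row per data point
```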

As usual, there is strong support for sparklyr from our fantastic open-source community! In chronological order, we thank the following individuals for making their pull requests part of sparklyr 1.6:

To learn more about the sparklyr 1.6 release, check out the full release notes. Want to get involved with sparklyr? Be sure to join the sparklyr-Announce and sparklyr-Technical-Discuss mailing lists to join the community and stay connected on the latest updates. 

Congratulations to the sparklyr team and we look forward to continued growth and success as part of the LF AI & Data Foundation! To learn about hosting an open source project with us, visit the LF AI & Data Foundation website.

Sparklyr Key Links

LF AI & Data Resources

Thank You ONNX & Baidu PaddlePaddle for Hosting a Great LF AI & Data Day!


A big thank you to ONNX and Baidu PaddlePaddle for hosting a great virtual meetup! The LF AI & Data Day ONNX Community Virtual Meetup was held on March 24, 2021 and was a great success, with over 100 attendees joining for part of the three-hour event.

The meetup included ONNX Community updates, partner/end-user stories, and SIG/WG updates. The virtual meetup was an opportunity to connect with and hear from people working with ONNX across a variety of groups. A special thank you to Ti Zhou from Baidu PaddlePaddle for working closely with the ONNX Technical Steering Committee, SIGs, and the ONNX community to curate the content.

Missed the meetup? Check out all of the presentations and recordings here.

This meetup took on a virtual format but we look forward to connecting again at another event in person soon. LF AI & Data Day is a regional, one-day event hosted and organized by local members with support from LF AI & Data, its members, and projects. If you are interested in hosting an LF AI & Data Day please email info@lfai.foundation to discuss.

ONNX, an LF AI & Data Foundation Graduated Project, is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them.  Be sure to join the ONNX-Announce mailing list to join the community and stay connected on the latest updates. You can join technical discussions on GitHub and more conversations with the community on LF AI & Data ONNX Slack channels.

ONNX Resources

LF AI & Data Resources

Resources for Data Science and Machine Learning Professionals


Guest Author: Matt Zand, President of High School Technology Services

Whether you are new to the field of data science or would like to brush up on your current skills, the resources listed here will be a great help. Beginners should start by learning procedural coding in Python, followed by mastering Python Object-Oriented Programming (OOP). Python is a very powerful yet easy-to-learn programming language; if you are already familiar with the logic of programming, learning Python will be easy.

Once you master Python, you can move on to learning how to use it for data analytics. The techniques and tools used for data analytics are common among data scientists who run analytics on large data sets (so-called “Big Data”) on a daily basis.

The other two popular applications of Python are machine learning and Artificial Intelligence (AI). In short, with Python-based machine learning, machines learn from their systems and users and begin to replicate business processes without any human interaction. Along similar lines, Python is used for AI to automate business processes and routine system transactions, often with assistance from Internet of Things (IoT) hardware devices.

In short, the resources provided in this article can serve as great guides for those interested in pursuing a career in Data Science, Machine Learning, Big Data, Data Analytics, and AI.

Python, Machine Learning and Data Science Resources

LF AI & Data Key Links

How Explainable AI is Changing the Banking and Finance Industry


Guest Author: Dr. Jagreet Kaur, Chief AI Officer, Xenonstack

Boosting Banks' Performance Using XAI

Machine Learning has automated business operations, making them more efficient, improving services, and enriching customer interaction. But AI systems have been observed to be biased, discriminating in their services based on gender, race, or ethnicity. Because most advanced ML algorithms function opaquely, noticing bias and tracking model decisions is difficult, and these systems lose both customers' and bankers' trust. This issue is known as the black-box problem.

Hunting Fraud: Using AI and ML to hunt fraud helps automate the task of fraud detection. But cases arise where the system misidentifies a customer and wrongly declines a credit card. This disappoints customers and erodes their trust, carries a reputational impact, and can lead customers to simply stop using the service, because neither developers nor bankers can tell whether the system is working properly or why it declined a particular card. These failures occur due to a lack of transparency in the system.

Explainable AI can solve these problems by providing transparency and answering questions such as:

  • How does the system decide that a card should be declined?
  • What is the reason behind the approval or decline of an individual customer's card?

Banks and financial institutions are investing in Explainable AI to solve these problems. We build AI systems using Explainable AI to make models transparent. Explainable AI makes model decisions more trustworthy, and it also addresses the issue of bias.

Before/After: Before adopting Explainable AI, users receive the model's output but do not know how it was produced. With Explainable AI, trust is built into the algorithm and the system can be explained, so no one has to say “I don't know what happened.”

Implementation: Visualization is used to interpret and explain the model. Various libraries and packages explain the model's decision process, that is, how the software reaches its conclusion. An interpretable system has two dimensions:

  • Transparency helps solve the black-box problem: it provides clarity on how the model works.
  • Explainability helps organizations rationalize and understand AI decisions: “why did the model do that?”

Case Study: Explainable AI in Banks

The banking industry has started automating loan systems using AI (Artificial Intelligence) that can make a decision or grant a loan within minutes, using customer data to predict creditworthiness. This can decrease overdue loans, reduce credit losses and risk, and cut fraud.

There is a cost associated with an incorrect model decision, and most of the models used in AI systems are black boxes by nature, which increases business risk. Understanding model decisions is challenging due to the lack of transparency.

End customers can ask questions about the model that developers cannot answer because the model is opaque, which does not build customer trust.

Explainable AI in Loan Approval System 

Explainable AI builds customer trust by providing a transparent and clear account of the model's methodology. Explainable AI uses various frameworks and libraries to answer customers' questions, such as:

  • How is data contributing to making a decision?
  • Which features influence the result the most?
  • How does changing the value of a particular feature affect the system output?
  • Why did the system decline the loan application of Mr. Jain?
  • What income is required for a loan to be approved?
  • How do models make decisions?

To make the model interpretable, we divide our approach into three levels, from which the various questions are drawn:

  • Global Explanation 
  • Local Explanation 
  • Feature interaction and distribution 

Table 1.1 maps some of these questions to the methodologies used to answer them:

| Stakeholder question | Methodology to be used | Implementation process |
| --- | --- | --- |
| Is it possible to enhance model explainability without damaging model performance? | Model accuracy vs. model explainability | Python and visualization |
| How is data contributing to making a decision? | SHAP (SHapley Additive exPlanations) | SHAP library |
| How does model output vary by changing the income of the borrower? | PDP (Partial Dependence Plot) / ICE (Individual Conditional Expectation) | PDPbox |
| Why did the system decline the loan application of Mr. Jain? | LIME | LIME library |
| What income is required for a loan to be approved? | Anchors | Anchors from Alibi |
| How do models make decisions? | defragTrees (for random forests) | defragTrees package |

Table 1.1
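To make one row of Table 1.1 concrete, here is a minimal sketch of the LIME approach to local explanations. Everything in it is hypothetical: the feature names, the synthetic loan data, and the random-forest model stand in for whatever a real loan-approval system would use.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical loan data: feature names and values are illustrative only.
rng = np.random.default_rng(0)
X_train = pd.DataFrame({
    "credit_history": rng.integers(0, 2, 200),
    "loan_amount": rng.uniform(1e3, 5e4, 200),
    "total_income": rng.uniform(2e4, 2e5, 200),
})
y = ((X_train["credit_history"] == 1) & (X_train["total_income"] > 6e4)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y)

explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["declined", "approved"],
    mode="classification",
)

# Explain one applicant's prediction via a local surrogate model.
exp = explainer.explain_instance(X_train.values[0], model.predict_proba, num_features=3)
print(exp.as_list())  # [(feature condition, local weight), ...]
```

LIME fits a simple surrogate model around the single prediction being explained, so the weights it reports are local to that applicant rather than global feature importances.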

Global Level Explanation 

Q1: How is data contributing to making a decision?

According to the model, ‘Credit history’, ‘Loan amount’ and ‘Total Income’ are the top three variables with the most impact on the application’s approval. 

The contribution of each feature to the decision can help the customer trust the model: if the right parameters influence the results, the model is working correctly.

Figure 1.1 depicts the importance of the features in predicting the output. Features are sorted from top to bottom in decreasing order of their weight in the decision.

Figure 1.1 

The probability of approval or rejection of the loan application depends on the person’s credit history. 

Q2: How is data contributing to making a decision? 

Figure 1.2 is the next iteration of the previous graph and gives more insight into the model. It shows the same ranking, with additional information about each feature's value.

  • Feature importance: Variables ranked in descending order of importance.
  • Impact: The horizontal location shows whether the effect of that value is associated with a higher or lower prediction.
  • Value: Color shows whether the variable's value is high or low for that observation; red denotes a high value and blue a low value. The variation in the color of the dots shows the value of the feature.
  • Correlation: The first parameter of Figure 1.2 shows that application approval depends strongly on credit history: having a good credit history gives a loan application a better chance of approval.

Figure 1.2 
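A hedged sketch of how plots like Figures 1.1 and 1.2 are typically produced with the SHAP library, assuming the shap API circa 2021 and reusing the hypothetical `model` and `X_train` from the LIME sketch above:

```python
import shap

# model, X_train: the hypothetical random-forest classifier and loan-feature
# frame from the LIME sketch above. For tree ensembles, TreeExplainer computes
# exact SHAP values; for sklearn classifiers it returns one array of SHAP
# values per class, so index 1 picks the "approved" class.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)

# Figure 1.1 style: mean |SHAP value| per feature, as a bar chart.
shap.summary_plot(shap_values[1], X_train, plot_type="bar")

# Figure 1.2 style: beeswarm plot; x position shows the impact on the output,
# color shows whether the feature value was high (red) or low (blue).
shap.summary_plot(shap_values[1], X_train)
```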

Feature interaction and distribution 

Q3: How does model output vary by changing the borrower’s income? 

After getting the answer to the first question, the customer may ask how a change in income changes the system output while the other parameters stay fixed.

To answer this, let's discuss the Partial Dependence Plot (PDP). A PDP shows the relation between the model output and a feature's value, with the other features marginalized out. The graph shows how changing income changes the system's decision.

Figure 1.3 
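Table 1.1 names PDPbox for this step; as an illustrative alternative, scikit-learn ships the same computation. A minimal sketch, again reusing the hypothetical `model` and `X_train` from the LIME sketch, and assuming scikit-learn 0.24's `plot_partial_dependence` (newer releases expose this as `PartialDependenceDisplay.from_estimator`):

```python
import matplotlib.pyplot as plt
from sklearn.inspection import plot_partial_dependence

# model, X_train: the hypothetical classifier and loan-feature frame from the
# LIME sketch above. For each candidate income value, the other features are
# marginalized out by averaging the model's predictions over the data.
plot_partial_dependence(model, X_train, features=["total_income"])
plt.show()  # Figure 1.3 style: predicted approval vs. borrower income
```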

Now that we have an idea of each feature's effect on the model's decisions, we can move to local explanations to understand the prediction for an individual customer.

Local Explanation 

Q4: Why did the system decline the loan application of Mr. Jain? 

Mr. Jain has applied for a loan, but the system rejects his application; now he wants to know why. Using SHAP, the system justifies its result: the SHAP value represents the impact of each piece of feature evidence on the model's output.

Mr. Jain has a poor credit history, he has not repaid previous debt, he has no income of his own, and the co-applicant's income is also low. All of these factors move the system's decision toward declining the application.

Figure 1.4 Mr. Jain’s justification 
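A hedged sketch of a per-applicant justification like Figure 1.4, and of the waterfall comparison used for Q5 below. It reuses the hypothetical `X_train` and `y` from the LIME sketch, swaps in a gradient-boosted model so the SHAP explanation is a single log-odds output per applicant, and assumes shap's Explanation API (shap >= 0.36):

```python
import shap
from sklearn.ensemble import GradientBoostingClassifier

# X_train, y: the synthetic loan features and labels from the LIME sketch.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y)

explainer = shap.TreeExplainer(model)
sv = explainer(X_train)  # shap.Explanation, one row per applicant

# Figure 1.4 style: feature-by-feature push from the base value (the average
# model output) to this applicant's score; row 0 plays the role of Mr. Jain.
shap.plots.waterfall(sv[0])

# For Q5, rendering waterfalls for two applicants side by side (e.g., sv[1]
# and sv[2]) shows exactly which feature separates their outcomes.
```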

Q5: Mr. John and Mr. Herry have almost the same parameter values, such as total income and credit history; why, then, did the system decline Mr. Herry's application and approve Mr. John's?

Both Mr. John and Mr. Herry have the same values for most attributes, but the AI system approves Mr. John's loan application and not Mr. Herry's.

To answer this question, Explainable AI uses SHAP's waterfall chart. Comparing the justifications for Mr. Herry and Mr. John, we notice that both have a good credit history and similar values for the other parameters, except income. Mr. Herry has a low salary compared to Mr. John, so his total income is also lower. That is why the system decides that Mr. Herry may not repay the loan and therefore rejects his application.

Figure 1.5 Mr. John’s justification 

Figure 1.6 Mr. Herry’s justification

How does Explainable AI improve banks' AI systems?

Explainable AI improves the AI systems banks use in several ways:

  • Builds trust by providing greater visibility to spot flaws and unknown vulnerabilities, giving assurance about system operation.
  • Improves performance through an understanding of how the model works and makes decisions.
  • Improves strategy and decision making, and as a result also improves revenue, customer behavior, and employee retention.
  • Enhances control over the system.
  • Identifies mistakes so teams can quickly act on them.

Business Benefits of Explainable AI

The business benefits of Explainable AI, as shown in the figure below:

Figure 1.1 

Optimize

  • Model Performance: Improves and optimizes AI systems through an understanding of how and why the system makes its decisions. It verifies system outputs and enhances them by detecting bias and flaws.
  • Decision Making: Predicting customer churn is a widespread ML use case, and a model can tell you that the churn rate will increase. Now suppose that, to reduce the churn rate, the financial institution reduces its fees, while the actual reason for the increase is the customer service experience. The fee reduction cannot solve the problem, because the real cause is customer interaction, not fees. To find the true reason, the institution needs Explainable AI to understand why the churn rate is increasing.

Retain 

  • Control: It helps retain control over AI. Visibility into AI models' data and features helps identify issues (such as drift) and solve them.
  • Safety: It tracks unethical design and works with the security team to safeguard against such faults.

Maintain 

  • Ethics: With clear governance and security guardrails, it brings ethical considerations into AI systems.
  • Trust: Explainable AI ensures that the algorithms make correct decisions. It builds trust by strengthening the stability and predictability of interpretable models.

Comply 

  • Accountability: A clear understanding of an AI system's accountability requires understanding how the model operates and evolves, which, in the case of black-box models, only Explainable AI can provide.
  • Regulation: Focuses on AI areas by establishing standards for governance, accuracy, transparency, and explainability. 

Conclusion

The contribution of Explainable AI to a loan-approval AI system makes it easy for end users to understand the system's complex workings, providing a human-centered interface. Explainability is key to producing a transparent, proficient, and accurate AI system that both bankers and borrowers can understand and use.

LF AI & Data Resources

Datashim Joins LF AI & Data as New Incubation Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), deep learning (DL), and data open source projects, is today announcing Datashim as its latest Incubation Project.

Datashim enables and accelerates data access for Kubernetes/OpenShift workloads in a transparent and declarative way. Datashim was released and open sourced by IBM in September 2019 and is growing to support use cases related to data access in AI projects.

Dr. Ibrahim Haddad, Executive Director of LF AI & Data, said: “We’re excited to welcome Datashim as our latest incubation project in LF AI & Data. The project is enabling and accelerating data access for Kubernetes/Openshift workloads in a transparent and declarative way. It offers a user-friendly experience to users who face challenges in configuring data access. We look forward to working with the community to grow the project’s footprint and to create new collaboration opportunities with our members and other hosted projects.” 

Datashim provides benefits across various entities:

  • Data Scientists/Engineers: Focus on workload and experiment development rather than on configuring and tuning data access
  • Storage Providers: Increase adoption since the framework is extensible without hindering the user experience 
  • Data-oriented Frameworks: Can build capabilities (caching, scheduling) on top of Datashim using a declarative way to access/manage data sources

Datashim supports connections to S3-, NFS-, and HostPath-based data sources and will expand to support more in the future.

Yiannis Gkoufas, the project lead for Datashim, said: “We are very excited to be part of LF AI & Data and look forward to making Datashim a catalyst for bigger adoption of hosted projects, especially those targeting Kubernetes environments.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. LF AI & Data will support the neutral open governance for Datashim to help foster the growth of the project. Learn more about Datashim on their GitHub and be sure to join the Datashim-Announce and Datashim-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to Datashim! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

Datashim Key Links

LF AI & Data Resources

Flyte Joins LF AI & Data as New Incubation Project


The LF AI & Data Foundation, the organization building an ecosystem to sustain open source innovation in artificial intelligence (AI), machine learning (ML), deep learning (DL), and data open source projects, is announcing Flyte as its latest Incubation Project.

Flyte is a production-grade, declarative, structured, and highly scalable cloud-native workflow orchestration platform. It allows users to describe their ML/data pipelines in Python, Java, or, in the future, other languages. It manages the data flow, parallelization, scaling, and orchestration of these pipelines. Flyte was released and open sourced by Lyft.
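As a flavor of what a Python-defined pipeline looks like, here is a minimal sketch using the flytekit SDK. The task, its body, and the values are illustrative assumptions, not code from the announcement:

```python
from typing import List

from flytekit import task, workflow


@task
def normalize(values: List[float]) -> List[float]:
    """Scale a batch of feature values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]


@workflow
def pipeline(values: List[float]) -> List[float]:
    # Flyte captures this call graph declaratively; when registered to a
    # cluster, the platform handles data flow, parallelization, and scaling.
    return normalize(values=values)


if __name__ == "__main__":
    # Workflows can run locally for development before being registered.
    print(pipeline(values=[3.0, 1.0, 2.0]))
```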

“The LF AI & Data Foundation is focused on growing and supporting an ecosystem of open source AI and Data projects. Today’s announcement of Flyte joining the Foundation in incubation represents significant progress toward achieving our vision,” said Dr. Ibrahim Haddad,  Executive Director of LF AI & Data. “This project has proven highly effective in creating concurrent, scalable, and maintainable workflows for machine learning and data processing. We look forward to working to further grow the Flyte community and encourage adoption of this exciting project, and enable collaboration with the various other projects hosted in LF AI & Data and the open source community.”

Flyte's feature set and extensibility make it extremely useful for machine learning pipelines such as feature engineering, ad hoc and periodic model training, model evaluation, ML monitoring, ETL, and general data processing. Flyte has extensions for Pandas, TensorFlow, PyTorch, and Spark, and integrates with hosted cloud services such as AWS SageMaker and Athena.

Flyte is built to streamline the journey from idea to production for Data Scientists and ML Engineers. Its mission is to break down silos between infrastructure, data, ML, and product teams. 

Ketan Umare, the project lead for Flyte, said: “We want Flyte to become the conduit for collaborating across myriad open source projects. To achieve this, it is essential that Flyte be perceived as a truly open platform. This is sorely missing in the current open source landscape. Today contributing Flyte to The LF AI and Data Foundation is a giant step towards our goal of simplifying the life of all hardworking ML and Data Engineers.”

LF AI & Data supports projects via a wide range of services, and the first step is joining as an Incubation Project. LF AI & Data will support the neutral open governance for Flyte to help foster the growth of the project. Learn more about Flyte on their GitHub and be sure to join the Flyte-Announce and Flyte-Technical-Discuss mailing lists to join the community and stay connected on the latest updates.

A warm welcome to Flyte! We look forward to the project’s continued growth and success as part of the LF AI & Data Foundation. To learn about how to host an open source project with us, visit the LF AI & Data website.

Flyte Key Links

LF AI & Data Resources

High School Technology Services Joins LF AI & Data


Guest Author: Matt Zand, President of High School Technology Services

Why Did HSTS Join LF AI & Data?

Matt Zand is the president of HSTS, which provides high school students and adults with coding and technology training. He also runs three community Meetup groups (Coding and Technology Classes, JavaScript DC, and Coding Bootcamps) with 4,400 members in the Washington, DC metro area. Through our live events and training services, we teach students how to become data scientists by offering courses such as Intro to Data Science, Python, and Machine Learning, and along the way we promote best practices for data analytics design, development, and maintenance.

HSTS also collaborates with its Coding Bootcamps to offer more self-paced training related to data science, data analytics, and related frameworks. Likewise, HSTS has partnered with DC Web Makers in building and implementing Artificial Intelligence (AI) and data analytics solutions, especially those related to blockchain applications. Specifically, the majority of blockchain data is stored off-chain, where data scientists analyze and extract business insights from transaction data. In short, in a production blockchain application, data analysts need to run and analyze data to boost the blockchain network's performance.

One of the key barriers to AI adoption is the lack of proper training, as many small companies lack skilled professionals to plan, design, and develop their AI applications. Likewise, the AI economy needs skilled people from all parts of an organization to collaborate. We believe that with the advent of emerging technologies like blockchain and AI, well-made, standardized AI and data analytics will play a vital role in companies' IT operations and management. Indeed, the more companies pursue automation best practices, the more they value the importance of AI and data science. To achieve this, companies need to provide their IT teams with the latest training on the design and development methods and standards intended for AI and data analytics. This is how great AIs are built!

In our journey of providing the community with AI training and best practices, we write and publish many hands-on guides (articles and tutorials) on our sites. To learn more about our training services, visit https://myhsts.org/.

LF AI & Data Key Links

RREPEATS – An Introduction to the Principles for Trusted AI – Thoughts and Next Steps


Guest Author: Susan Malaika, LF AI & Data Trusted AI Committee Member

The LF AI & Data Foundation and the Trusted AI Committee are thrilled to see the progress that has been made within the group, with an emphasis on the launch of the RREPEATS Principles. On 10 February 2021, Souad Ouali from Orange led a webinar on RREPEATS, which represents the 8 principles for Trusted AI as identified by the Principles Working Group (WG) of the Trusted AI Committee. The RREPEATS acronym captures: Reproducibility, Robustness, Equitability, Privacy, Explainability, Accountability, Transparency, and Security.

I invite you to read on for a brief summary of the RREPEATS introduction webinar, and I encourage you to view the complete webinar on YouTube.

Introduction:

The session kicked off with Souad inviting attendees to share what words came to mind when discussing Trusted AI. A wide array of words were shared, with bias being an overarching response, along with explicability, transparency, robustness, and affordability. This exercise set the stage for further discussions around whether the principles are restricted to Machine Learning and whether “Good Old-Fashioned AI” (GOFAI) is included. Souad clarified that the principles can be applied broadly, which emphasized the importance of this topic.

Discussion Takeaways:

The webinar included both positive and thought-provoking questions and discussions; below are just a few takeaways:

  • Alejandro Saucedo (The Institute for Ethical AI / Seldon) and Francois Jezequel (Orange) joined Souad in emphasizing that the principles provide direction for processes and people that provide AI.
  • We can drill down into tools that support the principles in various categories such as tools for datasets and models, for particular use cases and domain experts, e.g., to explore intended and unintended bias when applying the principle of equitability.
  • Typical use cases are adopted as illustrations for particular principles such as credit risk assessment for equitability, and self driving cars for accountability.
  • There is a relationship between pairs of principles, such as equitability (identifying bias) and accountability (being responsible and answering for what happened), along with a relationship between explainability and transparency. The latter principles are connected with being easy to understand, making it possible to foster trust. And…
  • Trust is the foundation of relationships. Shared goals can only be achieved if there is trust. Trust (and the principles of explainability and transparency) are necessary for the increased adoption of AI. 

Souad shared the process the Principles WG used in the creation of RREPEATS, which included the review of many documents, including content from the OECD, EU, SoA, ACM, IEEE, and DoD. It was clear that many minds were involved and that there is still work to be done as the effort continues to evolve.

Conclusion and Follow-On:

Souad wrapped up the discussion by pointing out that AI is a tool just like any other, and that in itself AI is neither good nor evil. She shared an analogy with a knife, which can be used for good, to prepare vegetables, but can also be used for evil, to kill. She added that evil is not in the tool but in the person using the tool. Souad closed by stating that the principles for Trusted AI provide a guide for AI and its application, and that the challenge is how to apply the principles across the entire AI life-cycle, which will be discussed in the next webinar.

Please join us on 28 April 2021 for the next session in this webinar series: The Trusted AI Principles – Practical Examples. Register here!

In this session, you will learn more about the application of the RREPEATS Principles through two practical examples:

  • Classification of Encrypted Traffic Application – Iman Akbari Azirani & Noura Limam, University of Waterloo; Bertrand Mathieu, Orange Labs, France
  • RosaeNLG Framework (an LF AI & Data project) – Ludan Stoecklé, CTO of Data & AI Lab BNP Paribas CIB and author of RosaeNLG

The session will also include a discussion with Calvin Lawrence, CTO & Distinguished Engineer Cognitive Solutions at IBM, Alejandro Saucedo, Engineering Director at Seldon, Chief Scientist at The Institute for Ethical AI, & Emilie Sirvent-Hien, Responsible AI program manager at Orange.

Stay connected with the Trusted AI Committee by joining the mailing list here and join us for an upcoming meeting! Learn more here.

LF AI & Data Key Links

Join LF AI & Data at LF Energy Spring Summit 2021


LF AI & Data is pleased to be a sponsor at the upcoming LF Energy Spring Summit 2021! The event will be held virtually in two segments on April 14, 2021, and registration is only US$50.

Segment 1

6:00 am – 12:00 pm PDT | 9:00 am – 3:00 pm EDT

3:00 – 9:00 pm CEST | 6:30 pm (April 14) – 12:30 am (April 15) IST

Segment 2

3:00 – 8:30 pm PDT | 6:00 – 11:30 am HKT (April 15)

7:00 am – 12:30 pm JST (April 15) | 8:00 am – 1:30 pm AEST (April 15)

Attend our Keynote

We invite you to join Dr. Ibrahim Haddad, the LF AI & Data Foundation Executive Director, for our LF AI & Data Keynote: Towards an Open Software Defined Power System Fueled By Open Source and AI.

Since the early 2000s, industries have been going through digital transformations, moving away from proprietary hardware and software systems toward open designs and open source software. The energy sector is next and is ripe for a transformation that leads the way to an open, software-defined grid.

In this talk, Dr. Haddad will briefly present how companies in various industries have led this transformation and focus on the role of open source and AI in future Energy and Power Systems planning and execution.  

The time of the keynote will be announced soon. Please visit the LF Energy Spring Summit 2021 website for more details. 

Join us by registering to attend LF Energy Spring Summit 2021 – Register Now!

The LF AI & Data Foundation's mission is to build and support an open AI community, and drive open source innovation in the AI, ML, and DL domains by enabling collaboration and the creation of new opportunities for all the members of the community.

Want to get involved with the LF AI & Data Foundation? Be sure to subscribe to our mailing lists to join the community and stay connected on the latest updates.

LF AI & Data Key Links