The Mozilla Thunderbird Blog: The New Thunderbird Website Has Hatched

Thunderbird.net has a new look, but the improvements go beyond that. We wanted a website where you can quickly find the information you need, from support to contribution, in clear, easy-to-understand text. While remaining grateful to the many amazing contributors who have helped build and maintain our website over the past 20 years, we wanted to refresh our information along with our look. Finally, we partnered with Freehive’s Ryan Gorley for their sleek, cohesive design vision and commitment to open source.

We wanted a website that’s ready for the next 20 years of Thunderbird, including the upcoming arrival of Thunderbird on mobile devices. But you don’t have to wait for that future to experience the new website now.

The New Thunderbird.net

The new, more organized framework starts with the refreshed Home page. All the great content you’ve relied on is still here, just easier to find! The expanded navigation menu makes it almost effortless to find the information and resources you need.

Resources provides quick links to all the news and updates on the Thunderbird Blog and to the unmatched community assistance at Mozilla Support, aka SUMO. Release notes are linked from the download and other options page, which has also been simplified while maintaining all the usual options. It’s now the main place to get links to download Beta and Daily and, in the future, any other apps or versions we produce.

The About section introduces the values and the people behind the Thunderbird project, which includes our growing MZLA team. Our contact page connects you with the right community resources or team member, no matter your question or concern. And if you’d like to join us, or just see what positions are open, you’ll find a link to our career page here.

Whether it’s giving your time and skill or making a financial donation, it’s easy to discover all the ways to contribute to the project. Our new and improved Participate page shows how to get involved, from coding and testing to everyday advocacy. No matter your talents and experience, everyone can contribute!

If you want to download the latest stable release, or to donate and help bring Thunderbird everywhere, those options are still an easy click from the navigation menu.

Your Feedback

We’d love to have your thoughts and feedback on the new website. Is there a new and improved section you love? Is there something we missed? Let us know in the comments below. Want to see all the changes we made? Check the repository for the detailed commit log.


SUMO Blog: Kitsune Release Notes – May 15, 2024

See full platform release notes on GitHub

New


  • Group messaging: Staff group members can send messages to groups as well as individual users.
  • Staff group permissions: We are now using a user’s membership in the Staff group, rather than the user’s is_staff attribute, to determine elevated privileges like being able to send messages to groups or seeing restricted KB articles.
  • In-product link on article page: You’ll now see an indicator on the KB article page for articles that are the target of in-product links. This is visible to users in the Staff group.

Screenshot of the in-product indicator in a KB article

Changed


  • Conversion from the GA3 to the GA4 Data API for gathering Google Analytics data: We recently migrated SUMO’s Google Analytics (GA) from GA3 to GA4. This has temporarily impacted our access to historical data on the SUMO KB Dashboard. Data is now pulled from GA4, which only has data since April 10, 2024, so the “Visits” counts for the “Last 90 days” and “Last year” views will only reflect data gathered since that date. Stay tuned for additional dashboard updates, including the inclusion of GA3 data. (A sketch of what a GA4 Data API query looks like follows this list.)

Screenshot of the Knowledge Base Dashboard in SUMO

Screenshot of the new SUMO inbox

  • Removed New Contributors link from the Contributor Tools: Discussions section of the top main menu (#1746)
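For readers curious about the new plumbing, here is a minimal sketch of what a GA4 Data API report query looks like using Google’s Node.js client. It is illustrative only: the property ID, dimension, and metric are placeholders, not SUMO’s actual dashboard configuration.

```typescript
// Minimal sketch of a GA4 Data API query (placeholder property ID and fields;
// not Kitsune's actual dashboard code).
import { BetaAnalyticsDataClient } from '@google-analytics/data';

const client = new BetaAnalyticsDataClient();

async function fetchVisits(): Promise<void> {
  const [response] = await client.runReport({
    property: 'properties/123456789', // placeholder GA4 property ID
    // GA4 only has SUMO data since 2024-04-10, so longer ranges are truncated.
    dateRanges: [{ startDate: '2024-04-10', endDate: 'today' }],
    dimensions: [{ name: 'pagePath' }],
    metrics: [{ name: 'sessions' }],
  });

  for (const row of response.rows ?? []) {
    console.log(row.dimensionValues?.[0]?.value, row.metricValues?.[0]?.value);
  }
}

fetchVisits().catch(console.error);
```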


The Mozilla Blog: Why I’m Joining Mozilla as Executive Director

Delight — absolute delight — is what I felt when my parents brought home a Compaq Deskpro 386 for us to play with. It was love at first sight, thanks to games like Reader Rabbit, but I fell especially hard once we had a machine connected to the Internet. The unparalleled joy that comes from making things with and for other people was intoxicating. I can’t tell you how many hours were spent building Geocities websites for friends, poring over message boards, writing X-Files fan fiction, exchanging inside jokes and song lyrics on AIM and ICQ chats with friends and far-flung cousins across the world. 

Actually, I could tell you. In detail. But it would be embarrassing. 

Years later I would learn that the ability to share, connect, and create is rooted in how the Internet works differently than the media preceding it. The Internet speaks standards and protocols. It links instead of copying. Its nature is open. You don’t need permission to make something on the Internet. That freedom holds enormous potential: At its best, it helps us explore history we didn’t know, build movements to better the future, or make a meme to brighten someone’s day. At its best, the Internet lets us see each other. 

That magic — this power — is revolutionary. Protecting it, celebrating it, and expanding it is why I’m so excited to join the Mozilla Foundation as its executive director.

I started my career as a media lawyer to protect those who made things that helped us see one another, and the truth about our shared world. Almost fifteen years ago, I co-founded and built a media law clinic to train others to do the same. After a stint at a law firm, I joined BuzzFeed as its first newsroom lawyer, which felt sort of like being a lawyer for the silliest and most serious parts of the internet all at the same time. In other words, I was a lawyer for the Internet at its best.

I am not naive about the Internet at its worst. From the Edward Snowden disclosures to a quick trip to Guantanamo Bay, Cuba, much of my career has confronted issues of surveillance — including of my own religious community. I watched as consumers became more concerned about surveillance and other harms online, and so we built an accountability journalism outlet, The Markup, to serve those needs. The Markup’s mission is to help people challenge technology to serve the public good, which intentionally centers human agency. So we didn’t just write articles: Our team imagined and made things people used to make informed choices. Blacklight, for example, empowers people to use the Web how they want by helping them see the otherwise invisible set of tracking tools watching them as they browse.

The through-line of my career has been grappling with how technology can uplift or stifle human agency. I choose the former. I bet you do too. 

This, of course, brings me back to the Mozilla Foundation. In our particular moment – as we’re deploying large-scale AI systems for the first time, as we’re waking up home pages from their long rests, and trying to “rewild” the Internet beyond walled gardens – I can think of no other place with the ability to help people shape technology to achieve their goals on their own terms. And there is no more important time.

After all, the world we live in now was once someone’s imagination. Someone dreamt, and then many someones built, the Internet, and democracy, and other wild-eyed ideas too. We can imagine a future that centers human agency, and then we can build it, bit-by-byte. In this wildly unpredictable moment in 2024, it certainly feels like it’s up for grabs as to whether technology will be used to liberate us or shackle us. But that also means it’s up to us – if we act now. 

With your help, together we can imagine and create the Internet we want. Not what Zuckerberg, Pichai, Musk, or any other tech titan wants – we can imagine and make what you want, on your own terms. Making things on your own terms is a team sport, and that’s why I’m especially thrilled to be joining Laura Chambers (CEO, Mozilla Corporation), Moez Draief (Managing Director, Mozilla.ai), Mohamed Nanabhay (Managing Partner, Mozilla Ventures), Mitchell Baker (Executive Chair of the Board), and Mark Surman (President, Mozilla) as part of Mozilla’s senior leadership team.

Technology’s come a long way since that Compaq, and it’s moving faster than ever before. My young boys won’t experience the Internet through Geocities or X-Files fan fiction or dial-up modems (probably?).* But it’s my mission to make sure they – and all of us – do have the sense of delight I felt at the dawn of our connected age: The unparalleled joy that comes from making things with and for other people.

Always yours,

Nabiha

*They will, however, have Pikachu. There’s always Pikachu. https://images.app.goo.gl/MgVJXismZaT7RtC86 

**There’s an important corollary to all this. I (and we at Mozilla) don’t have all the good ideas. We never will. So, consider my inbox to be yours. Got an idea? Let’s talk: hi-nabiha@mozillafoundation.org


The Mozilla Blog: Mozilla Foundation Welcomes Nabiha Syed as Executive Director

Public interest tech advocate will harness collective power to deepen Mozilla’s focus on trustworthy AI

Today, Mozilla Foundation is proud to announce Nabiha Syed — media executive, lawyer, and champion of public interest technology — as its Executive Director. Syed joins Mozilla from The Markup, where she was chief executive officer. 

As technology companies, civil society, and governments race to keep up with the rapid pace of AI innovation, Syed will lead Mozilla’s advocacy and philanthropy programs to serve the public interest. Mozilla, with Syed’s leadership, will carry forward the Foundation’s nuanced, practical perspective to help steer society away from the real risks and toward the benefits of AI. 

“Nabiha has an exceptional understanding of how technology, humanity and broader society intersect — and how to engage with the complicated challenges and opportunities at that intersection,” said Mark Surman, Mozilla Foundation President. “Nabiha will make Mozilla a stronger, bigger, and more impactful organization, at a time when the internet needs it most.”

Syed is known for her mission-driven leadership, focused on increasing transparency into the most powerful institutions in society. She comes to Mozilla after leading The Markup, an award-winning publication that challenges technology to serve the public good, from its launch through its successful acquisition in 2024. The Markup drove Congressional debates, inspired watershed litigation, and won multiple prestigious awards including Fast Company’s “Most Innovative,” along with the Edward R. Murrow, National Press Club, and Scripps Howard prizes. 

“The through-line of my career has been grappling with how technology can uplift or stifle human agency,” said Nabiha Syed, incoming Mozilla Foundation Executive Director. “After all, the technology we have now was once just someone’s imagination. We can dream, build, and demand technology that serves all of us, not just the powerful few. Mozilla is the perfect place to make that happen.” 

As Executive Director, Syed will oversee a staff of more than 100 full-time employees and an annual budget of $30 million. She joins Mozilla at a time of growth and ambitious leadership: Mozilla is rapidly expanding its investment in building a movement for trustworthy AI through grantmaking, campaigning, and research. The Mozilla portfolio has also grown to include a venture capital arm and a commercial AI R&D lab.

Prior to The Markup, Syed was a highly acclaimed media lawyer. Her legal career spanned private practice, the New York Times First Amendment Fellowship, and leading BuzzFeed’s libel and newsgathering matters, including the successful defense of the Steele Dossier litigations. She sits on the boards of the Scott Trust, the $1B+ British company that owns The Guardian newspaper; the New York Civil Liberties Union; the Reporters Committee for Freedom of the Press; Upturn; and the New Press, and serves as an advisor to ex/ante, the first venture fund dedicated to agentic tech.

Syed is widely sought after for her views on technology and media law, and has briefed two sitting presidents on free speech matters as well as diverse audiences including the World Economic Forum, annual investor meetings, Stanford, Wharton, and Columbia, where she is a lecturer.

She has been recognized with numerous awards, including as a 40 Under 40 Rising Star by the New York Law Journal, with Crain’s New York Business’s 40 Under 40 award, and with a Rising Star award from the Reporters Committee for Freedom of the Press. Syed was selected to serve on the National Committee on U.S.-China Relations, and was recognized by Forbes as one of the best emerging free speech lawyers.

Syed holds a J.D. from Yale Law School, an M.St. from the University of Oxford, where she was a Marshall Scholar, and a B.A. from Johns Hopkins University. She lives in Brooklyn with her husband and her two young boys.

Also read:

Why I’m Joining Mozilla as Executive Director, by Nabiha Syed 

Growing Our Movement — and Growing Mozilla — to Shape the AI Era, by Mark Surman


The Mozilla Blog: Growing Our Movement — and Growing Mozilla — to Shape the AI Era

Last August, we announced that Mozilla was seeking a new executive director to lead its movement building arm. I’m excited to announce that Nabiha Syed — media executive, lawyer, and champion of public interest technology — is joining us to take on this role. 

I’ve gotten to know — and admire — Nabiha over the last few years in her role as the chief executive officer of The Markup. I’ve been impressed by her thinking on how technology, humanity and society intersect — and the way she has used journalism and research to uncover the challenges and opportunities we face in the AI era. 

As we talked about the executive director role, I also found a thought partner who sees the potential to combine the ‘market’ and ‘movement’ sides of Mozilla’s personality to shape how the tech universe works. I am convinced that Nabiha will make us a stronger, bigger and more impactful organization, at a time when the internet needs it most.

Nabiha will take over leadership of Mozilla Foundation’s $30M/year portfolio of movement building programs starting on July 1. Her first task will be to supercharge the Foundation’s trustworthy AI efforts, with an initial focus on:

  • Partnering with other public interest organizations to shift the narrative on AI.
  • Creating — and funding — open source and community-driven data sets, tools, and research.
  • Growing a global community of talent committed to building responsible and trustworthy tech. 

She will take on the responsibility for all of Mozilla’s philanthropic and advocacy programs, and will lead fundraising for our charitable initiatives.

It’s important to note: Nabiha’s appointment is part of a broader effort to build new leadership that can take Mozilla into its next chapter. She joins Laura Chambers (CEO, Mozilla Corporation), Moez Draief (Managing Director, Mozilla.ai), Mohamed Nanabhay (Managing Partner, Mozilla Ventures), as well as Mitchell Baker (Executive Chair of Mozilla Corporation) and me, as part of the senior leadership team charged with advancing the Mozilla Manifesto in the AI era.

As Nabiha joins, I will be moving full-time to the role of Mozilla Foundation President, focusing even more deeply on the growth, cohesion and sustainability of the overall Mozilla portfolio of organizations. This includes further work with Mitchell and our Boards to develop a clear roadmap for Mozilla’s next chapter — with a particular focus on the role Mozilla can play in AI. It also includes support for senior leaders at Mozilla.ai and Mozilla Ventures — our two newest entities — as well as Mozilla’s new Global Head of Public Policy, Linda Griffin.

This is an exciting and pivotal moment — for Mozilla, the internet and the world. More and more people are realizing the need for tech products that are designed to be trustworthy, empowering and delightful — and for a movement that mobilizes people to reclaim the internet and ownership over their digital lives. We have a chance to build these things right now, and to reshape the relationship between technology and humanity for the better. I’m so glad Nabiha has joined us to make this happen. Welcome!


Mozilla Add-ons Blog: Manifest V3 Updates

Greetings add-on developers! We wanted to provide an update on some exciting engineering work planned for the next few Firefox releases in support of Manifest V3. The team continues to implement API changes previously agreed upon with the other browser vendors participating in the WebExtensions Community Group (WECG), ahead of Chrome’s MV2 deprecation. Another top area of focus has been addressing some developer and end-user friction related to MV3 host permissions.

The list below details some MV3 changes that will be available in the Firefox release channel soon, along with their Nightly, Beta, and Release dates.

  • Firefox 126: Chrome extension porting API enhancements (Nightly 3/18, Beta 4/15, Release 5/14)
  • Firefox 127: Updating MV3 host permissions on both desktop and mobile (Nightly 4/15, Beta 5/13, Release 6/11)
  • Firefox 128: Implementing the UI necessary to control optional permissions, and supporting the host permissions on Android that landed in 127 (Nightly 5/13, Beta 6/10, Release 7/9)

The Chrome extension porting API work that will land beginning in 126 will help ensure a higher level of compatibility and reduce friction for add-on developers supporting multiple browsers.

Beginning with Firefox 127, users will be prompted to grant MV3 host permissions as part of the install flow (similar to MV2 extensions). We’re excited to deliver this work: based on feedback from Firefox users and extension developers, the lack of an install-time grant has been a major hurdle for MV3 extensions in Firefox.

However, unlike the host permission granted at install time for MV2 extensions, MV3 host permissions can still be revoked by the user at any time from the about:addons page on Firefox Desktop. Given that, MV3 extensions should still leverage the permissions API to ensure that the permissions required are already granted.
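As a minimal sketch of that pattern (the host, function name, and use of TypeScript here are illustrative assumptions, not code from this post):

```typescript
// Illustrative WebExtension snippet: verify a host permission before relying
// on it, and re-request it if the user has revoked it from about:addons.
// "example.com" and the function name are placeholders.
declare const browser: any; // WebExtensions global (typed via @types/firefox-webext-browser)

const needed = { origins: ['*://example.com/*'] };

async function ensureHostPermission(): Promise<boolean> {
  // Check whether the permission is currently granted...
  if (await browser.permissions.contains(needed)) {
    return true;
  }
  // ...and if not, ask the user again. permissions.request() must be
  // called from a user gesture, such as a click on the extension's UI.
  return browser.permissions.request(needed);
}
```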

Lastly, in Firefox for Android 128, the Add-ons Manager will include a new permissions UI as shown below. This new UI will allow users to do the same on Firefox for Android with regard to host permissions, while also granting or revoking other optional permissions on MV2 and MV3 extensions.

Screenshots of the new permissions UI in the Add-ons Manager on Firefox for Android

We also wanted to take this opportunity to address a couple common questions we’ve been seeing in the community, specifically around the webRequest API and MV2:

  1. The webRequest API is not on a deprecation path in Firefox.
  2. Mozilla has no current plans to deprecate MV2, as mentioned in our previous MV3 update.

For more information on adopting MV3, please see our migration guide. Another great resource is the FOSDEM presentation a couple of Mozilla engineers delivered recently, Firefox, Android, and Cross-browser WebExtensions in 2024.

If you have questions or feedback on our Manifest V3 plans, we would love to hear from you in the comments section below or, if you prefer, drop us an email.


The Mozilla Blog: Firefox at the Webbys: Winners talk internet red flags and what they’d rather keep private online

A big screen reads: 28th Annual Webby Awards. (Credit: Getty Images for the Webby Awards)

The Firefox team hit the red carpet Monday at this year’s 28th annual Webby Awards with some of the internet’s most influential figures and their groundbreaking projects. But we weren’t just there to watch the honorees accept their trophies. We wanted the inside scoop on how they win the web game every day. 

So, we asked them about internet red flags and even threw down a challenge called “Unload or Private Mode,” where they had a choice: spill the beans or take a “Firefox shot” to keep it private. Check out the video below to see how Webby winners like Madison Tevlin, Abi Marquez, James and Oliver Phelps, Michelle Buteau and more responded:

The Webbys are hosted each year by the International Academy of Digital Arts and Sciences — a group of over 3,000 tech experts, industry leaders, and creative minds. Each category honors two achievements: The Webby Award, chosen by the Academy, and The Webby People’s Voice Award, which is voted on by the global internet community. It’s possible for nominees to win one or both. 

Monday’s ceremony featured notable guests like Keke Palmer, Coco Rocha, Ina Garten, Julia Louis-Dreyfus and Laverne Cox, as well as tech journalist Kara Swisher, who was honored with the Webby Lifetime Achievement Award. 

Kara Swisher accepts her Webby Lifetime Achievement Award. (Credit: Getty Images for the Webby Awards)

The Webbys have evolved with the internet since their inception in 1996, adding categories such as Podcasts; Games; and AI, Metaverse & Virtual. And just as the web is a critical tool for every area of life today, the Webby Awards remain an important and relevant honor recognizing achievement in interactive media.

A hallmark feature is the ceremony’s five-word acceptance speech limit, which has produced some memorable moments from the likes of David Bowie and Prince over the years. Monday night’s speeches didn’t disappoint. Here are some of our favorite speeches: 

  • “Cooking Show Pretend, Gratitude Real.” – Jennifer Garner
  • “Don’t put twinkies on pizza.” – Josh Scherer
  • “Actually, we are all one degree.” – Kevin Bacon
  • “I ain’t done, tech bros.” – Kara Swisher
  • “I’m blessed to do this.” – Keke Palmer
  • “Risk everything every time.” – Jerrod Carmichael
  • “It’s fun proving people wrong.” – Madison Tevlin
  • “Healing, collective trauma, necessary, possible.” – Laverne Cox

Check out some other highlights:

Keke Palmer accepts the Webby Award for Special Achievement. (Credit: Getty Images for the Webby Awards)
Julia Louis-Dreyfus accepts the Webby Podcast of the Year Award. (Credit: Getty Images for the Webby Awards)
Shannon Sharpe accepts his Webby Advocate of the Year Award. (Credit: Getty Images for the Webby Awards)
Creator Abi Marquez accepts her Webby Award. (Credit: Getty Images for the Webby Awards)

See all the best moments from last night’s show on social media by searching #Webbys and at webbyawards.com. For the full list of Webby Award winners, visit winners.webbyawards.com/winners

That’s a wrap on our Webby Awards coverage! Keep hanging with us and we’ll help you navigate the web safely and freely, having a little fun along the way. 


The Mozilla Blog: See what’s changing in Firefox: Better insights, same privacy

An illustration shows the Firefox logo, a fox curled up in a circle.

Innovation and privacy go hand in hand here at Mozilla. To continue developing features and products that resonate with our users, we’re adopting a new approach to better understand how you engage with Firefox. Rest assured, the way we gather these insights will always put user privacy first.

What’s new in Firefox’s approach to search data 

To improve Firefox based on your needs, it’s key to understand how users interact with essential functions like search. We’re ramping up our efforts to enhance the search experience by developing new features like Firefox Suggest, which recommends online content that corresponds to your queries. To make sure features like this work well, we need better insight into overall search activity, all without trading off our commitment to user privacy. Our goal is to understand what types of searches are happening so that we can prioritize the right features by use case.

With the latest version of Firefox for U.S. desktop users, we’re introducing a new way to measure search activity broken down into high-level categories. This measure is not linked to specific individuals and is further anonymized using a technology called OHTTP (Oblivious HTTP) to ensure it can’t be connected with user IP addresses.

Let’s say you’re using Firefox to plan a trip to Spain and search for “Barcelona hotels.” Firefox infers that the search results fall under the category of “travel,” and it increments a counter to calculate the total number of searches happening at the country level.

Here’s the current list of categories we’re using: animals, arts, autos, business, career, education, fashion, finance, food, government, health, hobbies, home, inconclusive, news, real estate, society, sports, tech and travel.
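To make that mechanism concrete, here is a small illustrative sketch of aggregate category counting. The classifier and all names below are hypothetical stand-ins, not Firefox’s actual implementation, and the real pipeline additionally submits counts over OHTTP so they can’t be tied to an IP address.

```typescript
// Illustrative sketch: tally search categories in an aggregate counter,
// without storing the query or any user identifier. The classifier below
// is a hypothetical stand-in for Firefox's real categorization.
type Category = 'travel' | 'health' | 'inconclusive'; // abridged from the list above

const counts = new Map<Category, number>();

function categorize(query: string): Category {
  return query.toLowerCase().includes('hotel') ? 'travel' : 'inconclusive';
}

function recordSearch(query: string): void {
  const category = categorize(query);
  // Only the per-category counter changes; the query itself is discarded.
  counts.set(category, (counts.get(category) ?? 0) + 1);
}

recordSearch('Barcelona hotels'); // increments the "travel" counter
console.log(counts); // Map { 'travel' => 1 }
```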

Knowing which types of searches happen most frequently will give us a better understanding of what’s important to our users, without giving us additional insight into individual browsing preferences. This helps us take a step forward in providing a browsing experience that is more tailored to your needs, without stepping away from the principles that make us who we are.

What Firefox’s search data collection means for you

We understand that any new data collection might spark some questions. Simply put, this new method only categorizes the websites that show up in your searches — not the specifics of what you’re personally looking up. 

Sensitive topics, like searching for particular health care services, are categorized only under broad terms like health or society. Your search activities are handled with the same level of confidentiality as all other data regardless of any local laws surrounding certain health services. 

Remember, you can always opt out of sending any technical or usage data to Firefox. Here’s a step-by-step guide on how to adjust your settings. We also don’t collect category data when you use Private Browsing mode on Firefox.  

As far as user experience goes, you won’t see any visible changes in your browsing. Our new approach to data will just enable us to better refine our product features and offerings in ways that matter to you. 

We’re here to make the internet safer, faster and more in tune with what you need – just as we have since open-sourcing our browser code more than 25 years ago. Thanks for being part of our journey!


The Mozilla Blog: Raphael Mimoun on creating tech for human rights and justice, combatting misinformation and building a privacy-centric culture

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. We talk with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms and more.

How did the work you did early on in human rights after you completed university help you understand the power of technology and ultimately inspire you to do a lot of the work that you do right now?

Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to head movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much kind of spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is government spying on the citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, but translating into offline violence, repression, and so on. And so I think that was the moment where I was like, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines because if we just treat it as a tool — one of those neutral tools — we end up getting very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sort of things. So that was really the point when I was like, “OK, let’s try and tackle technology as its own thing.” Not just thinking of it as a neutral tool that can help or not.

There’s so much misinformation out there now that it’s so much harder to tell the difference between what’s real and fake news. Twitter was such a reliable tool of information before, but that’s changed. Do you think that any of these other platforms can be able to help make up for so much of the misinformation that is out there?

I think we all feel the weight of that loss of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never kind of a community tool, but there was still an ethos, right? Like a philosophy, or the values of the platform were still very much like community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.

I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to put something that is very spectacular that is very eye catchy, and so I think that leads to even more misinformation.

I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized with much more autonomy and agency to users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that is very much what Twitter was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t actually be taken over by a power hungry billionaire.

What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?

I don’t know if that’s the biggest challenge, but one of the really big challenges that we’re seeing is how the digital is meeting real life and how people who are active online or on the phone on the computer are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police tries to catch you and grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find it. Or it would be uploaded already. And things like that, I think is one of the big things that we’re seeing again. I don’t know if that the biggest challenge online at the moment, but one of the big things we’re seeing is just that it’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, or at the border, in the street and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when they abuse those powers, is there any recourse? Most places in the world, at least, where we are working, there is definitely no recourse. And so I think that connection between thinking you’re just taking a photo for social media but actually the repercussion is so real because you’re going to have someone take your phone, and maybe they’re going to delete the photo, or maybe they’re going to detain you. Or maybe they’re going to beat you up — like all of those different things. I think this is one of the big challenges that we’re seeing at the moment, and something that isn’t traditionally thought of as an internet issue or an online digital rights issue because it’s someone taking a physical device and looking through it. It often gets overlooked, and then we don’t have much kind of advocacy around it, or anything like that.

Raphael Mimoun at Mozilla’s Rise25 award ceremony in October 2023.

How is this issue overseas compared to America?

It really depends on where in each country, but in many places where we work, we work with human rights defenders and journalists who are on the front lines in places that are very repressive. So there is no form of accountability whatsoever. Again, it depends on where, but they can take your phone, put it into the trash, and you’ll never see it again. And you have no recourse whatsoever. It’s not like you can go to the police, because they’ll laugh at you and say, “What the hell are you doing here?”

What do you think is one action everybody can take to make the world and our lives online a little bit better?

I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky and the Fediverse — including Mastodon — are examples, because I think it’s our responsibility to build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms — one way of doing that is just having an account, and whatever you post on one, you just post on the other — I feel like that’s one thing that can make a big difference in the long run.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I was talking a little bit earlier about how we are building a culture that is more privacy-centric, like people are becoming aware, becoming wary about all these things happening to their data, their identity, and so on. And I do think we are at a turning point in terms of the technology that’s available to us, the practices and what we need as users to maintain our privacy and our security. I feel like honestly not even 25, I think in 10 years, if things go well — which is hard to know in this field — and if we keep on building what we already are building, I can see how we will have an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us or seeing who has access and all of those things.

Lastly, what gives you hope about the future of our world?

That people are not getting complacent, and that it is always people who are standing up to fight back. We saw it at Google, with people standing up as part of the No Tech for Apartheid coalition and losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So that really gives me hope.


The Mozilla Blog: Keoni Mahelona on promoting Indigenous communities, the evolution of the Fediverse and data protection

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Keoni Mahelona, a builder behind technologies that aim to protect and promote Indigenous languages and knowledge. We talked with Keoni about his current work at Te Hiku Media, the challenges of preserving Indigenous cultures, big tech and more.

So first off, what inspired you to do the work you’re doing now with Te Hiku Media?

Mahelona: I sort of started at the organization cause my partner, who’s the CEO, needed help with doing a website. But then the website turned into an entire digital platform, which then turned into building AI to help us do the work that we have to do, but I guess the most important thing is the alignment of values with like me as a person and as a native Hawaiian with the values of the community up here — Māori community and the organization. Having the strong desire for sovereignty for our land, which has been the struggle we’ve been having now for hundreds of years. We’re still trying to get that back, both in Aotearoa and in Hawaii, but also sovereignty for our languages and our data, and pretty much everything that encompasses us in our communities. And it was really clear that the work that we do at Te Hiku is very important for the community, but also that we needed to maintain sovereignty over that work. And if we made the wrong choices with how we store our data, where we put our data, what platforms we use, then we would cede some of that sovereignty over and take us further back rather than forward.

What were (and are) some of those challenges that you guys had to overcome to be able to create those tools? I feel like a lot of people might not know those challenges and how you have to persevere through those things to create, to preserve.

Sure, the lack of data is a challenge that big tech seem to overcome quite easily with their billions of dollars, whether they’re stealing it at scale or paying people for it at scale. They have the resources to do that and litigate if they need to, because of theft, and they’re just doing what America did right? Stole our land at scale. So for us, actually, we knew that the data would be the hardest part, but not so much like getting the data, or whether the data existed — there’s a vibrant community of language speakers here — the hard part was going to be, how do we protect the data that we collect? And even now, I worry because there’s just so many bots online scraping stuff, and we see bots trying to sort of log into our online forms. And I’m thinking hopefully these are just bots trying to log into a form because it sees the form, versus someone who knows that we’ve got some valuable data here, and if they can get in, they could use that data to add Māori to their models and profit off of that. When you have organizations like Microsoft and Google making hundreds of millions off of selling services to education and government in this country, you know that would be a valuable corpus for them — I’m not saying that they would sort of steal, I don’t know, I’d hope not, but I feel like OpenAI would probably do something like that.

And how do we overcome that? We just tried. We did the best we could do, given the resources we had, to ensure that things are safe, and we think they’re relatively safe, although I still get anxiety about it. Some of the other challenges we face come from being a bunch of brown people from a community, so there’s the stereotype associated with the area and with anyone who might associate with this place. So there were people like, “Ha, you guys can’t do this.” And we proved them wrong. There were even funders who were Māori, who actually thought, “These guys are crazy, but you know what, this is exactly what we need to fund. We need to find people who are crazy and who might actually pull this off because it would be quite beneficial.”

We’ve had other people inquire as to why our organization got science funding to do science research. I actually have a master’s in science — I actually have two masters in science, although one’s a business science degree, whatever that means — but there was this quite racist media organization on the south island of this country who did an official Information Act request on our organization, saying, “Why is this Māori media company getting science-based funding? They don’t know anything about science.” We actually had a scientist at our organization, and they didn’t, so this is some of the more interesting challenges that we’ve come across in this journey of going from radio broadcasting and television broadcasting to actually being a tech company and doing science and research. So it’s the racism and the discrimination that we’ve had to overcome as well. In some cases, we think we’ve been denied funding because our organization is Māori, and we’ve had to often do the hard work first off the smell of an oily rag, as they say here, to prove that we are capable of doing the work for people to recognize that, yeah, they can actually fund us. And that we can deliver results based on the stipulations of the fund or whatever when you’re getting science-based funding grants and stuff like that. I think we’ve shown the government that you don’t need to be a large university to actually do good research and have an impact in the science community. But it certainly hasn’t been easy.

Keoni Mahelona at Mozilla’s Rise25 award ceremony in October 2023.

I imagine even with how long you’ve been there and how long you guys have been doing this, that there’s still an ongoing feeling of anxiety that’s extremely frustrating.

We’re a nonprofit, so a lot of our money comes from government funding. We’re also a broadcaster, so we have public broadcasting funding that funds some of the work we do, and then there’s science-based funding.

The New Zealand political environment right now is absolutely terrible. There have been hundreds, probably thousands, of job cuts in the government. The current coalition government needs to raise something like three billion dollars for tax cuts for landlords, and in order to do that, they’re just slashing a lot of funding and projects and people’s jobs in government. There’s this rhetoric that’s been peddled that the government is quite inefficient, and we’re just hemorrhaging money and all these stupid positions and things like that. So that also gives us an anxiety, because a changing government might affect funding that is available to our organization. So we also have to deal with that as being a charity and not sort of being a capitalist organization.

The other thing that gives us anxiety is the inevitable, right? I actually think it’s inevitable, unfortunately, that these big tech companies will eventually be able to sort of replicate our languages. They won’t be good. They’ll never be good and good to the point where it will truly benefit and move our people forward. But they will be good enough that they will be able to profit from it. It profits by giving it that reputation of providing that service, ensuring you continue to go to Google, where you’re then served ads, and so they’re not selling the translation, but they are selling ads alongside it for profit, right? We see this essentially happening with a lot of Indigenous languages, where there is enough data being put online that these mostly American big tech corporations will profit from. And the sad thing is that it was the Americans in the first place and these other colonial nations that fought to make our languages extinct. And now their corporations stand to profit from the languages that they tried to make extinct. So it’s really terrible.

How do you think some of these bigger corporations can be more respectful, inclusive, and supportive of Indigenous communities?

That’s an interesting question. I guess the first question is, should they be inclusive? Because sometimes the best thing to do is just stay away and let us get on with it. We don’t need your help. The unfortunate reality is that so many of our people are on Facebook and are on Google, or whatever — the platforms are so dominating or imperialist that we have to use them in some cases, and because English is the dominant language on these platforms, especially for many Indigenous communities where they are colonized by English-speaking nations, it means that you’re just going to continue to be bombarded with English and not have a space if you don’t go out of your way to make a space and to sort of speak your language. It’s a bit of a catch-22, but I think it’s up to the communities to figure that one out because we could collectively come together as community and be like, “We’re not. We never expect Facebook or whatever to support our language and all these other tech companies and platforms.” And that’s fine, let’s go out into our own environment in our own communities and speak in languages rather than trying to rely on these tech companies to sort of do it for us, right?

There are ways that they can actually just kind of help, but like, stay out of our business.

And that’s the better way to do it, because this sort of outsider coming in trying to save us, it just doesn’t work. I’ve been advocating that you have to support these communities to lead the solutions and what they see is best for their people, because Google doesn’t know what’s best for these communities. So they need to support the communities, and I don’t mean by like building the language technologies themselves and selling it back to them, that is not the support I’m talking about. The support is staying away or giving them discounts on resources or giving them resources so that they can build, and they can lead, because then you’re also upskilling them. 

What do you think is the biggest challenge that we face in the world this year on and offline? And how do we combat it?

I see stuff happening to the Fediverse, which is interesting. Something that happened recently was some guy who very much knows and in his blog post identified as a tech bro from Silicon Valley, made the universal decision that the best thing to do for everybody is to hook up Threads and the Fediverse, so that people in Threads can access stuff in Mastodon etc., and then likewise the other way around. And this is like a single dude who apparently had talked to people and decided it was his duty or mission to connect Threads to the Fediverse, and it was just like, are you joking? And then there’s this other thing going on now, where there are these similar types of dudes getting angry at some instances for blocking other instances because they have people who are like racist or misogynist, and they’re getting angry at these moderators who are doing what the point of the Fediverse is, right? Where you can create a safe space and decide who gets to come in and who doesn’t. What I’m getting at is, I think that as the Fediverse kind of grows, it’s going to be interesting to see what sort of problems comes and how the things that we wanted to escape by leaving Twitter and jumping on Mastodon are kind of coming in. And I think that’s going to be interesting to see how we deal with that.

This is again where the incompatibility of capitalism and general communities sort of comes to play because if we have for-profit companies trying to do Fediverse stuff, then essentially, we’re going to get what we already have, because ultimately, at the end of the day you’re trying to maximize for profit. So long as the internet is a place where we have dominating companies trying to maximize for profit, we’re just always going to have more problems, and it’s absolutely terrible and frightening.

But yeah, politics and I think the evolution of the Fediverse are probably the thing that I would be most concerned about. Then there’s also the normal stuff, which is just the theft of data and privacy. 

What is one action that you think everybody should take to make the world and our online lives a little bit better?

I think they should just be more cognizant of the data they decide to put online and don’t just think about how that data affects you as an individual, but how does it affect those who are close to you? How does it affect the communities to which you belong? And how does it affect other people who might be similar to you in that way? 

People need to be respectful of their own data and others’ data, and think about their actions online with respect to being good stewards of all data — their own data, data from their communities, data of others. And whether you should download this thing or steal that thing or whatever. That’s essentially my message for everyone: be respectful, and think about data as you would think about your environment, taking care of it and respecting it.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

The fall of capitalism, I guess. The restoration of the Hawaiian nation — I can continue. Ultimately, I think a lot of problems come back to some very fundamental ways in which society has structured itself.

What gives you hope about the future of our world?

I think actually this younger generation. I had this impression coming out of high school going to university and then kind of seeing the new generation coming through and being confused having perceptions through your generations. When we live stream high school speeches … just the stuff that these kids talk about is amazing. And even sometimes you’re like having a bit of a cry, because it’s so good in terms of the topics they talk about. But to me, that gives me hope there are actually like some really amazing people and young people who will someday fill our shoes and be politicians. That gives me hope that these people still exist despite all the negative stuff that we see today. That’s what I’m hopeful for.


The Mozilla Blog: How AI is redefining your sports experience right now

Artificial intelligence has the potential to transform many sectors across the economy, and the sports realm is no exception.

New technologies are already changing the way sports are played, viewed and consumed. From enhancing athlete recovery, to live performance tracking and influencing rule changes, AI’s plunge into sports is providing leagues more data and analysis than they’ve ever had.

Let’s start with football. The NFL has arguably led the way in integrating AI into sports. The league has partnered with Amazon Web Services (AWS) since 2017, and at the beginning of 2024 created the Digital Athlete, a tool using AI and machine learning to “build a complete view of players’ experience, which enables NFL teams to understand precisely what individual players need to stay healthy, recover quickly, and perform at their best.” The technology collects data from multiple sources, including game day data using AWS, and essentially takes video and data from training, practice and in-game action. It then uses AWS technology to “run millions of simulations of NFL games and specific in-game scenarios” to identify which players are at the highest risk of injury. Teams use that information to develop injury prevention, training and recovery regimens. The technology was used by all 32 teams this past NFL season.

AI and the NFL’s relationship goes beyond the Digital Athlete initiative. In March, the league implemented a new kickoff rule after predictive analysis identified plays and body positions that most likely lead to injuries. The process included capturing data through chips in players’ shoulder pads, Brian Rolapp, chief media and business officer for the NFL, recently explained at The Washington Post Futurist Tech Summit, which was sponsored by Mozilla.

Consumers get a chance to experience the league’s AI investment, too. During the NFL and Amazon’s “Thursday Night Football” TV broadcasts, viewers have the option to watch games with “Prime Vision,” an alternate broadcast powered by Next Gen Stats, a real-time player and ball tracking data system. Prime Vision can do fun things like highlight a potential blitzing defender on a play based on what happens before the ball is snapped.

At other levels of football, AI is prevalent for the next generation of players with dreams of making it to the NFL. Exos, a sports science-driven performance company based in Arizona, has been employing AI technology for NFL draft hopefuls in the pre-draft training process for years. Several of the top picks in recent years have traveled to Exos’ facility in Phoenix to complete training, shed time off their 40-yard dashes and improve their vertical leaps, among other things. 

“When an athlete arrives, we take them through a robust sports science evaluation process,”  said Anthony Hobgood, Exos’ senior director of performance. “This evaluation gives us critical information about the athletes’ force profile, muscle-to-bone ratio and fundamental movement qualities. For example, some athletes will run faster by putting on more muscle, while others’ performance could be negatively impacted by putting on more muscle. The data we collect allows our team to make informed decisions about the game plans we build for our athletes. Our speed coaches have a combined total of over 40 years of experience training over 1,500 NFL draft prospects. When an athlete decides to train at Exos, they can be confident they are getting the best system ever created for NFL draft preparation.”

This training has paid off: From 2015 to 2023, Exos produced 743 draft picks, an average of 83 per year, including 127 first-rounders. Last spring, every NFL team except one (Atlanta) drafted an Exos-trained athlete.

“Our system has been tried and tested for over 25 years and uses data and the latest science in order to ensure our athletes have the very best,” Exos VP of Sport Adam Farrand said.

The NBA has used AI for some time as well. This February, at its All-Star tech summit, it debuted a generative AI feature called NB-AI, which aims to enhance and personalize the live game experience for fans. The technology can make game highlights look like an animated superhero movie — think a film based on a certain bug that resides in New York.

“Today, AI is creating a similar excitement to what we saw around the early days of the internet,” NBA Commissioner Adam Silver said at the presentation. “Intuitively, most of us have a sense that artificial intelligence is going to change our lives. The question is, ‘How?’”

The WNBA also utilizes tech similar to the NFL’s Digital Athlete, obtaining three-dimensional player and ball-tracking data through its partnership with Genius Sports’ Second Spectrum. WNBA coaches and front office leaders have access to analytical tools, including shot quality, maximum speed and defensive matchup data.

While the world of baseball has long stood behind its history and tradition, it has also stepped into the AI revolution — in fun and strategic ways.

Baseball has been using data and AI to aid in scouting players, player development, injury risk assessment, video analysis and game strategy. There are even AI chatbots that can create scouting reports for MLB players and evaluate them on the metrics the AI believes best represent their abilities. Minor league baseball clubs are embracing Uplift Labs, which uses mobile movement tracking and 3D analysis tech for scouting players. The system uses mobile devices to “accurately capture athletic movements in any environment, gaining insights into performance optimization.”

In February, the Houston Astros were among the first MLB clubs to introduce facial recognition technology to allow fans into their ballpark. (The New York Mets were the first to do this, in 2021.) The San Francisco Giants even used AI and machine learning to understand what giveaway products they should offer fans for game promos.

AI’s capabilities in the sports world are only expanding as the technology evolves at an extraordinary pace. This shift gives major sports leagues opportunities to continue improving their product on and off the field, while offering fans an exciting new way to experience the games they love.

But there’s still a human element in sports we can’t ignore as these advancements continue. The human aspect is what makes sports so great, after all. While AI can provide teams with data about why a basketball player is struggling to shoot well, for example, it has limitations and can’t replace a coach evaluating that player’s performance on video, talking and empathizing with them, and coaching them through their struggles. The human interaction — not AI — builds the trust between athletes and coaches needed to navigate those situations. Or, while a team like the Giants can certainly utilize AI to determine fan giveaways, going to a tailgate and talking directly to fans about what they’d like to see at games is a better route. AI can never bench the human side of the sports experience; it should be utilized as a resource for leagues, players and coaches while still prioritizing the human element.

As excitement around these tools grows, it’s important to remember to protect these sports while following laws and regulations. The work teams and leagues need to do to preserve the history and human side of these sports, while moving them forward and ensuring AI is used ethically, is critical.

The post How AI is redefining your sports experience right now appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogThunderbird for Android / K-9 Mail: April 2024 Progress Report

Welcome to our monthly report on turning K-9 Mail into Thunderbird for Android! Last month you could read about how we found and fixed bugs after publishing a new stable release. This month we start with… telling you that we fixed even more bugs.

Fixing bugs

After the release of K-9 Mail 6.800 we dedicated some time to fixing bugs. We published the first bugfix release in March and continued that work in April.

K-9 Mail 6.802

The second bugfix release contained these changes:

  • Push: Notify user if permission to schedule exact alarms is missing
  • Renamed “Send client ID” setting to “Send client information”
  • IMAP: Added support for the \NonExistent LIST response attribute
  • IMAP: Issue EXPUNGE command after moving without MOVE extension
  • Updated translations; added Hebrew translation

I’m especially happy that we were able to add back the Hebrew translation. We removed it prior to the K-9 Mail 6.800 release because the translation was less than 70% complete (it was at 49%). Since then, volunteers have translated the missing bits of the app, and in April the translation was almost complete.

Unfortunately, the same isn’t true for the Korean translation, which was also removed. It was 69% complete, just below the threshold, and since then there has been no significant change. If you are a K-9 Mail user and a native Korean speaker, please consider helping out.

F-Droid metadata (again?)

In the previous progress report we described what change had led to the app description disappearing on F-Droid and how we intended to fix it. Unfortunately we found out that our approach to fixing the issue didn’t work due to the way F-Droid builds their app index. So we changed our approach once again and hope that the app description will be restored with the next app release.

Push & the permission to schedule alarms

K-9 Mail 6.802 notifies the user when Push is enabled in settings, but the permission to schedule exact alarms is missing. However, what we really want to do is ask the user for this permission before we allow them to enable Push.

This change was completed in April and will be included in the next bugfix release, K-9 Mail 6.803.

Material 3

As briefly mentioned in March’s progress report, we’ve started work on switching the app to Google’s latest version of Material Design – Material 3. In April we completed the technical conversion. The app is now using Material 3 components instead of the Material Design 2 ones.

The next step is to clean up the different screens in the app. This means adjusting spacing, text sizes and colors, and sometimes making more extensive changes.

We didn’t release any beta versions while the development version was still a mix of Material Design 2 and Material 3. Now that the first step is complete, we’ll resume publishing beta versions.

If you are a beta tester, please be aware that the app still looks quite rough in a couple of places. While the app should be fully functional, you might want to leave the beta program for a while if the look of the app is important to you.

Targeting Android 14

Part of the necessary app maintenance is to update the app to target the latest Android version. This is required for the app to use the latest security features and to cope with added restrictions the system puts in place. It’s also required by Google in order to be able to publish updates on Google Play.

The work to target Android 14 is now mostly complete. It involved some behind-the-scenes changes that users hopefully won’t notice at all. We’ll be testing these changes in a future beta version before including them in a K-9 Mail 6.8xx release.

Building two apps

If you’re reading this, it’s probably because you’re excited for Thunderbird for Android to finally be released. However, we’ve also heard numerous times that people love K-9 Mail and wish the app would stay around. That’s why we announced in December that we’d do just that.

We’ve started work on this and are now able to build two apps from the same source code. Thunderbird for Android already includes the fancy new Thunderbird logo and a first version of a blue theme.

But we’re not quite done yet. We still have to change the parts of the app where the app name is displayed to use a placeholder instead of a hard-coded string. Then there’s the About screen and a couple of other places that require app-specific behavior.

We’ll keep you posted.

Releases

In April 2024 we published the following stable release:

The post Thunderbird for Android / K-9 Mail: April 2024 Progress Report appeared first on The Thunderbird Blog.

Mozilla Add-ons BlogDeveloper Spotlight: Port Authority

Port Authority gives you intuitive control over global block settings, notifications, and allow-list customization.

A few years ago a developer known as ACK-J stumbled onto a tech article that revealed eBay was secretly port scanning its customers (i.e. scanning its users’ internet-facing devices to learn what apps and services are listening on the network). The article further claimed there was nothing anyone could do to prevent this privacy compromise. ACK-J took that as a challenge. “After going down many rabbit holes,” he says, “I found that this script, which was port scanning everyone, is, in my opinion, malware.”

We spoke with ACK-J to better understand the obscure privacy risks of port scanning and how his extension Port Authority offers unique protections.

Why does port scanning present a privacy risk?

ACK-J: There is a common misconception/ignorance around how far websites are able to peer into your private home network. While modern browsers limit this to an extent, it is still overly permissive in my opinion. The privacy implications arise when websites, such as google.com, have the ability to secretly interact with your router’s administrative interface and local services running on your computer, and to discover devices on your home network. This behavior should be blocked by the same-origin policy (SOP), a fundamental security mechanism built into every web browser since the mid-1990s; however, due to convenience, it appears to be disabled for these requests. This caught a lot of people by surprise, including myself, and is why I wanted to make this type of traffic “opt-in” on my devices.
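
To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch of the kind of probe a web page can run against your machine. The eBay script reportedly used WebSockets and was far more sophisticated; the point here is only that the connection attempt itself is allowed, even though the SOP prevents reading the response.

```ts
// Hypothetical, simplified illustration of an in-browser port probe.
// The same-origin policy stops the page from reading the response,
// but whether the connection succeeds can still be observed.
async function probeLocalPort(port: number): Promise<boolean> {
  try {
    // "no-cors" yields an opaque response when the connection succeeds,
    // rather than a CORS error we couldn't tell apart from a closed port.
    await fetch(`http://127.0.0.1:${port}/`, { mode: "no-cors" });
    return true; // something is listening on this port
  } catch {
    return false; // connection refused or otherwise failed
  }
}

// Example: check a few ports that desktop apps commonly listen on.
// (The port numbers are illustrative, not taken from any real scanner.)
for (const port of [5900, 6463, 8080]) {
  probeLocalPort(port).then((open) => {
    console.log(`port ${port}: ${open ? "open" : "closed"}`);
  });
}
```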

Do you consider port scanning “malware”? 

ACK-J: I don’t necessarily consider port scanning malware; port scanning is commonplace and should be expected for any computer connected to the internet with a public IP address. On the other hand, devices on our home networks do not have public IP addresses and instead are protected from this scanning by a technology called network address translation (NAT). Due to the nature of how browsers and websites work, website code needs to be rendered on the user’s device (behind the protections put in place by NAT). This means websites are in a privileged position to communicate with devices on your home network (e.g. IoT devices, routers, TVs, etc.). There are certainly legitimate use cases for port scanning even on internal networks, the most common being communicating with a program running on your PC, such as Discord. I prefer to be able to explicitly allow this type of behavior instead of leaving it wide open by default.

Is there a way to summarize how your extension addresses the privacy leak of port scanning?

ACK-J: Port Authority acts in a similar manner to a bouncer at a bar: whenever your computer tries to make a request, Port Authority verifies that the request is not trying to port scan your private network. If the request passes the check, it is allowed through and everything functions as normal. If it fails, the request is dropped. This all happens in a matter of milliseconds, but if a request is blocked you will get a notification.
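
For a rough sense of what that “bouncer” can look like under the hood, here is a minimal sketch using Firefox’s blocking webRequest API. This is an illustration under our own assumptions, not Port Authority’s actual code; a real extension also needs the webRequest, webRequestBlocking and host permissions in its manifest, and must handle IPv6, DNS rebinding and other edge cases.

```ts
// Minimal sketch: cancel requests that public pages make to private hosts.
// Simplified regex for loopback, RFC 1918 and link-local hostnames.
const PRIVATE_HOST =
  /^(localhost$|127\.|10\.|192\.168\.|169\.254\.|172\.(1[6-9]|2\d|3[01])\.)/;

const isPrivate = (hostname: string): boolean => PRIVATE_HOST.test(hostname);

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const targetIsPrivate = isPrivate(new URL(details.url).hostname);
    // originUrl is the page that triggered the request; it is absent for
    // direct navigations (e.g. typing your router's address), which we
    // leave alone so you can still reach local devices yourself.
    const origin = details.originUrl ? new URL(details.originUrl).hostname : "";
    const fromPublicPage = origin !== "" && !isPrivate(origin);
    return targetIsPrivate && fromPublicPage ? { cancel: true } : {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```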

Should Port Authority users expect occasional disruptions using websites that port scan, like eBay?

ACK-J: Nope, I’ve been using it for years, along with many friends, family and 1,000 other daily users. I’ve never received a single report that a website wouldn’t let you log in, check out, or complete other expected functionality due to the extension blocking port scans. There are instances where you’d like your browser to communicate with an app on your PC, such as Discord; in that case you’ll receive an alert and can add Discord to an allow-list, or simply click the “Blocking” toggle to disable blocking temporarily.

Do you see Port Authority growing in terms of a feature set, or do you feel it’s relatively feature complete and your focus is on maintenance/refinement?

ACK-J: I like extensions that serve a specific purpose, so I don’t see it growing in features, but I’d never say never. I’ve added an allow-list to explicitly permit certain domains to interact with services on your private network. I haven’t enabled this feature on the public extension yet, but will soon.

Apart from Port Authority, do you have any plans to develop other extensions?

ACK-J: I actually do! I just finished writing an extension called MailFail that checks the website you are on for misconfigurations in its email server that would allow someone to spoof emails using its domain. This will be posted soon!


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Port Authority appeared first on Mozilla Add-ons Community Blog.

SUMO BlogMake your support articles pop: Use the new Firefox Desktop Icon Gallery

Hello, SUMO community!

We’re thrilled to roll out a new tool designed specifically for our contributors: the Firefox Desktop Icon Gallery. This gallery is crafted for quick access and is a key part of our strategy to reduce cognitive load in our Knowledge Base content. By providing a range of inline icons that accurately depict interface elements of Firefox Desktop, this resource makes it easier for readers to follow along without overwhelming visual information.

We want your feedback! Join the conversation in our SUMO forum thread to ask questions or suggest new icons. Your input is crucial for improving this tool.

Thanks for helping us support the Firefox community. We can’t wait to see how you use these new icons to enrich our Knowledge Base!

Stay engaged and keep rocking the helpful web!

 

The Mozilla BlogJulia Janssen creates art to be an ambassador for data protection

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Julia Janssen, an artist making our digital society’s challenges around data and AI tangible through art. We talked with Julia about what sparked her passion for art, her experience in art school, traveling the world, generative AI and more.

I’m always curious to ask artists what initially sparked their interest, whether early in their lives or in general, in the art that they do. How was that for you?

Janssen: Well, it’s actually quite an odd story. When I was 15 years old and in high school — at that age, you have to choose the kind of courses or the direction of your classes — I chose higher mathematics and art history, both as substitute classes. And I remember my counselor called me to his office and said, “why?” Like, “why would you do that? Difficult mathematics and art history have nothing to do with each other.” I just remember saying, “Well, I think both are fun,” but also, for me, art is a way to understand mathematics, and mathematics is a way to make art. I think at an early age, I noticed that both were my interests.

So I started graphic design at an art academy. I also did a lot of different projects around our relationship with technology, using a lot of mathematics to create patterns or art, always calculating things. And I could never quite grasp a theme, or the fact that technology was something I was so interested in, but at graduation in 2016, it all clicked together. It all fell into place. When I started reading a lot of books about data ownership rights and early media theories, I was like, “what the hell is happening? Why aren’t we all constantly so concerned about our data? These big corporations are monetizing our attention. We’re basically enslaved by this whole data industry. It’s insane what’s happening. Why is everybody not worried about this?” So, I made my first artwork about this during my graduation. And looking back at that time, I notice that a lot of my work during art school was already circling this — for example, things like terms and conditions. I did a Facebook journal where I played a kind of news anchor in a newsroom, reading timelines out loud, all these things. So I think it was already present in my work, but in 2016 it all clicked together. And from there, things happened.

I found that I learned so much more by actually being in the field and actually doing internships, etc. during college instead of just sitting in the classroom all day. Did you kind of have a similar experience in art school? What’s that like?

Yeah, for sure. I do have some criticisms of how art is taught in schools. I’m not sure if it’s a worldwide thing, but what I experienced, for example, is that the outside world is a place of freedom — you can express yourself, you can do anything you want. But I also noticed that privacy and data protection were not typical topics of interest at the time, so most of my teachers didn’t encourage me to stand up for this or to research it, at least not in the way that I was doing it. Or they’d say, “you can be critical of technology, but saying that privacy is a commodity, that’s not done.” So I felt like, yes, it is a great space, and I learned a lot, but it’s also sometimes a little bit limited to trending topics. For example, when I was graduating, everybody was working with gender identities and sustainability, those kinds of things. Everybody was focused on it, and this was kind of the main thing to do. So I feel like there should be more freedom in opening up the field to other interests.

It’s made me notice that all these kids are worrying so much about failing a grade, for example. But later on, in the real world, it’s about surviving, earning your money, and all the other things that can go wrong in the process. So go experiment and go crazy; the only thing that can happen is that you fail a class. It’s not so bad.

There’s a system in place that has been there forever, and so I imagine it’s got to feel draining sometimes, and really difficult to break through, for a lot of young artists.

Yeah, for sure. And I think that my teachers weren’t outdated — most of them were semi-young artists in the field as well, teaching one day a week — but I think there is also this culture of “this is how we do it, and this is what we think is awesome.” When I taught there — for about six months as a substitute teacher — I tried to encourage people and let them know: I can give you advice and be your guide, helping you out with whatever you want. And if I’m not particularly interested in what you want to say in a project, I can still help you find a way to make it awesome: how to exhibit it, how to make it bigger or smaller, how to place it in a room.

Julia Janssen at Mozilla’s Rise25 award ceremony in October 2023.

I’m curious to know: as an artist, when you see a piece of art, what is the first thing that you look for, or what catches your attention and inspires you?

Honestly, it’s the way it is exhibited. I’m always very curious about and interested in the ways people display research or use media to express their work. When I’m walking through a museum, I often get more enthusiastic about things like the mechanics behind how works are hung than about the work that is there to see, and it’s also where I find inspiration to do things myself, which I find most thrilling. But also, I’m not an artist who just creates beautiful things, so to say; I need information, data, mathematics to even start creating something. And I can highly appreciate people who can just make very beautiful, emotional things that simply exist. I think that’s a completely different type of art.

You’ve also traveled a lot, which is one of the best ways for people to learn and gain new perspectives. How much has traveling the world inspired the work that you create?

I think a good example was last year when I went to Bali, which was actually kind of the weirdest experience of my life, because I hated the place. But it actually inspired a lot of things for my new project, Mapping the Oblivion, which I also talk about in my Rise25 video. What I felt in Bali was that this is such a beautiful island, with such beautiful culture and nature, and it’s completely occupied by tourists and tourist traps and shiny things to make people feel well. It’s kind of hard to explain what I felt there, but for example, what I like to do on a holiday or on a trip is a lot of hiking: just going into nature to walk and explore and make up my mind, or not think about anything. In Bali, everything is designed to make you feel at comfort or at your service, and that just felt completely out of place for me. But I felt maybe this is something that I need to embrace. So looking on Google Maps, I got this recommendation to go to a three-floor swimming pool. It was awesome, but it was also kind of weird, because I was looking out on the jungle, and I felt like this place completely ruined this whole beautiful area. It clicked with what I was researching about platformification and frictionlessness, where these platforms or technology or social media timelines try to make you at ease and comfortable, and make decisions effortless. So on music apps, you click on “Jazzy Vibes” and it will keep you satisfied with some jazzy vibes. But you will not be able to really explore new things. It just goes as the algorithm goes. It gives you a feeling of being in control, but it’s actually a highly curated playlist based on their data. What I felt happening in Bali was that I wanted to do something else, but to make myself more comfortable, to find the safer option, I chose that swimming pool instead of exploring. And I did it based on the recommendation of an app. And then I felt constantly in-between. I actually wanted to go to more local places, but I had questions: what will I find, is it safe to eat there? So I went to easier places where I’d probably meet peers, people I could have a conversation or a beer with. You go for the easier option, and then I felt I was drifting away from doing unexpected things and exploring what is there. I think that’s just a result of the technology built around us, comforting us with news that we probably might be interested in seeing, playing some music, and so on. But what makes us human in the end is to feel discomfort, to be in awkward positions, to explore something that is out there and unexpected and weird. I think that’s what makes the world we live in great.

Yeah, that’s such a great point. There are so many people who just don’t naturally and organically explore a city they’re in. They’re always looking at recommendations and things like that. But a lot of times, if you just go and get lost in it, you can see a place for what it is, and it’s fun to do that.

It’s also kind of the whole fear of missing out on that one special place. But I feel like you’re missing out on so much by constantly looking at the screen and finding recommendations of what you should like or what you should feel happy about. I think this just makes you blind to everything out there, and it also makes us very disconnected from our responsibility of making choices.

What do you think is the biggest challenge that we face this year online, or as a society in the world, and how do we combat that?

I mean, for me, most of the debate about generative AI is about it taking our jobs, or the jobs of creatives or writers or lawyers. I think the more fundamental question we have to ask ourselves with generative AI is how we will live with it and still be human. Do we just allow whatever it can do, or do we also take some measures around what it is desirable for it to replace? Because I feel like if we’re just outsourcing all of our choices to this machine, then what the hell are we doing here? We need to find a proper relationship. I’m really not against technology, not at all; I also see all the benefits, and I love everything that it is creating — although not everything. But I think it’s really about creating a healthy relationship with the applications and the technology around us, which means that sometimes you have to choose friction and do it yourself and be a person. For example, if you are now graduating from university, I think it will be a challenge for students to actively choose to write their own thesis and not just generate it with ChatGPT by setting some clever parameters. Small challenges like that are something we are all currently facing, and fixing them is something we have to want.

What do you think is a simple action everyone can take to make the world online and offline a little better? 

Here in Europe, we have the GDPR. It says that we have the right to data access, which means you can ask a company to show you the data they have collected about you, and they have to show you within 30 days. I also do a lot of teaching in workshops, at schools or universities, showing how to request your own data. You get to know yourself in a different way, which is funny. I did a lot of projects around this by turning it into art installations. It’s a very simple act to perform, but it’s interesting, because this is only the raw data (you still don’t know how they use it in profiling algorithms), but it gives you clarity on some advertisements you see that you don’t understand, or an understanding of the scale of what they are collecting about you in every moment. So that is something I highly encourage. Another thing in line with that is “the right to be forgotten,” which is also a European right.

We started to Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

That’s a nice question. I hope that we will be celebrating an internet that is not governed by big tech corporations, but based on public values and ethics. One of the smaller steps toward that: currently in Europe, one of the legal foundations for processing your data is informed consent. Informed consent is such a beautiful term, originating from the medical field, where a doctor gives you information about the procedure and the possibilities, risks and everything. But on the internet, it is applied as this small button like, “Hey, click here,” and you give up all your rights and continue browsing without questioning. I think one step is to get a real, proper, fair way of giving consent, or maybe even switching it around, so that the infrastructure of data collection is not the default, but instead it’s “do not collect anything without consent.”

We’re currently in a transition phase where there are a lot of very important alternatives to avoid big tech applications. Think about how Firefox is already doing so much better than all these alternatives. But I think, at the core, all our basic default apps should not be driven by commercial, very toxic incentives to monetize your data. That has to do with how we design this infrastructure, the policies and the legislation, but also the technology itself and the protocol layers of how it works. This is not something we can change overnight. I hope that we’re not only thinking in alternatives to avoid these toxic applications and big corporations, but that not harming your data, your equality, your fairness, your rights becomes the default.

In our physical world, we value things like democracy, equality, autonomy and freedom of choice. On the internet, that is just not present yet, and I think it should be at the core, at the foundation, of building the digital world, as it should be in our current world.

What gives you hope about the future of our world?

Things like Rise25, to be honest. I think it was so special. I spoke about this with other winners as well: we’re all just so passionate about what we’re doing. Of course, we have inspiration from people around us, and other people doing work on this, but there still aren’t that many of us, and being united in this way just gives a lot of hope.

The post Julia Janssen creates art to be an ambassador for data protection appeared first on The Mozilla Blog.

The Mozilla BlogAbbie Richards on the wild world of conspiracy theories and battling misinformation on the internet

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Abbie Richards, a former stand-up comedian turned content creator dominating TikTok as a researcher focused on understanding how misinformation, conspiracy theories and extremism spread on the platform. She is also a co-founder of EcoTok, an environmental TikTok collective specializing in social media climate solutions. We talked with Abbie about finding emotional connections with audiences, the responsibility of social media platforms and more.

First off, what’s the wildest conspiracy theory that you have seen online?

It’s hard to pick the wildest because I don’t know how to even begin to measure that. One that I think about a lot, though, comes from the spirituality ones, which I find very interesting. There’s the New Earth one, with people who think that the Earth is going to be ascending into a higher dimension. And the way that links to climate change: when heat waves happen, and the temperature is hotter than normal, they’re like, “it’s because the sun’s frequency is increasing because we’re going to ascend into a higher dimension.” I am kind of obsessed with that line of thought. Also because they think that if you, your soul, vibrate at a high enough frequency — essentially, if your vibes are good enough — you will ascend, and if not, you will stay trapped here in post-ascension dystopian Earth. Which is wild, because then you’re assigning some random, universal, numerical system for how good you are based on your vibrational frequency. Where is the cutoff? At what point of vibrating am I officially good enough to ascend, or am I going to always vibrate too low? Are my vibes not good? Do I not bring good enough vibes to go to your paradise? I think about that one a lot.

As someone who has driven through tons of misinformation and conspiracy theories all the time, what do you think are the most common things that people should be able to notice when they need to be able to identify if something’s fake? 

So I have two answers to this. The first is that the biggest thing people should know when they’re encountering misinformation and conspiracy theories online is that they need to check in with how a certain piece of information makes them feel. If it’s a piece of information that they really, really want to believe, they should be especially skeptical, because that’s the number one thing: not whether they can recognize that AI-generated human ears are janky. It’s the fact that they want to believe what the AI-generated deepfake is saying, and no matter how many tricks we can tell them about symmetry and about looking for clues that it is a deepfake, fundamentally, if they want to believe it, the thing will still stick in their brain. They need to learn more about the emotional side of encountering misinformation and conspiracy theories online. I would prioritize that over the tiny little tricks and tips for how to spot it, because really, it’s an emotional problem. When people lean into conspiracy theories and fall down a rabbit hole, it’s not because they’re not media-literate enough. Fundamentally, it’s because it’s emotionally serving something for them. It’s meeting some sort of emotional, psychological, epistemic need: to feel like they have control, to feel like they have certainty, to feel like they understand things that other people don’t and are in on knowledge, to feel like they have a sense of community, right? Conspiracy theories create senses of community and make people feel like they’re part of a group. There are so many things that they provide that no amount of tips and tricks for spotting deepfakes will ever address. And we need to be addressing those. How can we help them feel in control? How can we help them feel empowered so that they don’t fall into this?

The second is wanting to make sure that we’re putting the onus on the platforms rather than on people to decipher what is real and what is not, because people are going to consistently be bad at that, myself included. We are all quite bad at determining what’s real. I mean, we’re encountering more information in a day than our brains can even remotely keep up with. It’s really hard for us to decipher which things are true and not true; our brains aren’t built for that. And while media literacy is great, there’s a much deeper emotional literacy that needs to come along with it, and also a shifting of that onus from the consumer onto the platforms.

Abbie Richards at Mozilla’s Rise25 award ceremony in October 2023.

What are some of the ways these platforms could take more responsibility and combat misinformation on their platforms?

It’s hard. I’m not working within the platforms, so it’s hard to know what sort of infrastructure they have versus what they could have. It’s easy to look at what they’re doing and say that it’s not enough, but because I don’t know about their systems, it’s hard to make specific recommendations like “here’s what you should be doing to set up a more effective …”. What I can say is that, without a doubt, these mega corporations that are worth billions of dollars certainly have the resources to invest in better moderation and to experiment with different approaches: try different things, see what works, and encourage healthier content on your platform. Fundamentally, that’s the big shift. I can yell about content moderation all day, and I will, but the incentives on the platforms are not to create high-quality, accurate information. The incentives on all of these platforms are entirely driven by profit: how long they can keep you watching, and how many ads they can push to you. That means the content that thrives is the most engaging content, which tends to be less accurate. It caters to your negative emotions, to things like outrage, and that sort of low-quality, easy-to-produce, inaccurate, highly emotive content is what is set up to thrive on the platform. This is not a system that is functional with a couple of flaws; the misinformation crisis that we’re in is very much the result of the system functioning exactly as it’s intended.

What do you think is the biggest challenge we face in the world this year on and offline? 

It is going to be the biggest election year in history. We just have so many elections all around the world, and platforms that we know don’t serve healthy, functional democracy super well, and I am concerned about that combination of things this year.

What do you think is one action that everybody can take to make the world, and our online lives, a little bit better?

I mean, log off (laughs). Sometimes log off. Go sit in silence just for a bit. Don’t say anything, don’t hear anything. Just go sit in silence. I swear to God it’ll change your life. I think we are in a state right now where we are chronically consuming so much information, like we are addicted to information and just drinking it up, and I am begging people to take at least an hour a week to not consume anything, and just see how that feels. If we could all just step back for a little bit and log off and rebel a little bit against having our minds commodified for these platforms to just sell ads, I really feel like that is one of the easiest things people can do to take care of themselves.

The other thing would be to check in with your emotions. I can’t stress this enough. When you encounter information, how does that information make you feel? How much do you want to believe it? So very much, my advice is to slow down and feel your feelings.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

I hope that we’ve created a nice socialist internet utopia, where we have platforms where people can interact, build community, create culture, and share information and stories in a way that isn’t driven entirely by what’s the most profitable. I’d like to be celebrating the opposite of a clickbait economy, where everybody takes breaks. I hope that’s where we are in 25 years.

What gives you hope about the future of our world?

I interact with so many brilliant people who care so much and are doing such cool work because they care, and they want to make the world better, and that gives me a lot of hope in general. I think that approaching all of these issues from an emotional lens, and understanding that people in general just want to feel safe and secure, that they just want to feel like they know what’s coming around the corner for them and can have their peaceful lives, is a much more hopeful way to think about pretty scary political divides. I think there is genuinely a lot more that we have in common than there are differences between us. It’s just that right now, those differences feel very loud. There are so many great people doing such good work with so many different perspectives, and combined, we are so smart together. On top of that, people just want to feel safe and secure. And if we can figure out a way to help people feel safe and secure and help them feel like their needs are being met, we could create a much healthier society collectively.

The post Abbie Richards on the wild world of conspiracy theories and battling misinformation on the internet appeared first on The Mozilla Blog.

Mozilla Add-ons Blog1000+ Firefox for Android extensions now available

The new open ecosystem of extensions on Firefox for Android launched in December with just over 400 extensions. Less than five months later, we’ve surpassed 1,000 Firefox for Android extensions. That’s an impressive achievement by this developer community! It’s exciting to see so many developers embrace the opportunity to explore new creative possibilities for mobile browser customization.

If you’re a developer intrigued to learn more about building extensions on Firefox for Android, here’s a great place to get started. Or maybe you already have some feedback about missing APIs on Firefox for Android?

What are some of your favorite new Firefox for Android extensions? Drop some props in the comments below.

The post 1000+ Firefox for Android extensions now available appeared first on Mozilla Add-ons Community Blog.

Mozilla L10NL10n report: May 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

To start, a “logistical” announcement: on April 29 we changed the configuration of the Firefox project in Pontoon to use a different repository for source (English) strings. This is part of a larger change that will move Firefox development from Mercurial to Git.

While the change was mostly transparent for localizers, there is an added benefit: as part of the Firefox project, you will now be able to localize about 40 strings that are used by GeckoView, the core of our Android browsers (Firefox, Focus). For your convenience, these are grouped in a specific tag called GeckoView. Since these are mostly old strings dating back to Fennec (Firefox for Android up to version 68), you will also find that existing translations have been imported — in fact, we imported over 4,000 translations.

Going back to Firefox desktop, version 127 is currently in Nightly and will move to Beta on May 13. Over the past few weeks there have been a few new features and updates that are worth testing to ensure the best experience for users.

You are probably aware of the Firefox Translations feature, available for a growing number of languages. While this feature was originally limited to full-page translation, it’s now also possible to select text on the page and translate it through the context menu.

Screenshot of the Translation selection feature in Firefox.

Reader Mode is also in the process of getting a redesign, with more controls to customize the user experience.

Screenshot of the Reader Mode settings in Firefox Nightly.

The New Tab page has a new wallpaper function: in order to test it, go to about:config (see this page if you’re unfamiliar), search for browser.newtabpage.activity-stream.newtabWallpapers.enabled and flip its value to true (double-clicking the row will work). Then open a new tab and click the gear icon in the top-right corner. Note that the available wallpapers change depending on the current theme (dark vs. light).

Screenshot of New Tab wallpaper selection in Nightly.
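
If you prefer a configuration file over flipping the value by hand, the same pref can be set in a user.js file in your profile directory, a standard Firefox mechanism that applies prefs at startup (shown here for convenience; the about:config route above works just as well):

```js
// user.js in your Firefox profile directory -- applied at every startup.
// Equivalent to flipping the pref to true in about:config.
user_pref("browser.newtabpage.activity-stream.newtabWallpapers.enabled", true);
```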

Last but not least, make sure to test the new features available in the integrated PDF Reader, in particular the dialog to add images and highlight elements on the page.

Screenshot of the PDF Viewer in Firefox, with the “Add image” UI.

What’s new or coming up in mobile

The mobile team is currently redesigning the app menus in Firefox Android and iOS. There will be many new menu strings landing in the upcoming versions (you may have already noticed some prelanding), including some dynamic menu text that may get truncated for some locales – especially on smaller screens.

Testing for this type of localization issue will be a focus: we’ll set expectations for it soon and send testing instructions (the v130 or v131 releases are currently the target). Strings will make their way incrementally into the new menus available through Firefox Nightly, allowing enough time for localizers to translate and test continuously.

What’s new or coming up in web projects

Mozilla.org

The mozilla.org team is establishing a regular cleanup routine by labeling soon-to-be-replaced strings with an expiration date, usually two months after the string has become obsolete. This approach will minimize the time communities spend localizing strings that are no longer used. In other words, if you see a string labeled with a date, please skip it. Below is an example; in this case, you want to localize the v2 string:

example-v2 = Security, reliability and speed — on every device, anywhere you go.

# Obsolete string (expires: 2024-03-18)
example = Security, reliability and speed — from a name you can trust.

Relay Website

This product is in maintenance mode and will not be open for new locales until we remove obsolete strings and revert the content migration to mozilla.org (see also the l10n report from November 2023).

What’s new or coming up in SUMO

  • Konstantina is joining the SUMO force! She moved from the Marketing team to the Customer Experience team in late Q1. If you haven’t gotten to know her yet, please don’t hesitate to say hi!
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.
  • Want to know more about what we’ve done in Q1 2024? Read the recap here.

What’s new or coming up in Pontoon

Large Language Model (LLM) Integration

We’re thrilled to announce the integration of LLM-assisted translations into Pontoon! For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab. This feature enhances Google Translate outputs by leveraging a Large Language Model (LLM). Users can now tailor translations to be more formal or informal and rephrase text for clarity and tone.

Since January, our team has conducted extensive research to explore how other localization services are utilizing AI. We specifically focused on comparing the capabilities of Large Language Models (LLMs) against traditional machine translation methods and identifying industry best practices.

Our findings revealed that while tools like Google Translate provide a solid foundation, they sometimes fall short, often translating text too literally. Recognizing the potential for improvement, we introduced functionality within Pontoon to adjust the tone and refine phrases directly.

For example, consider the phrase “Firefox has your back” translated in the Italian locale. The suggestion provided by Google’s machine translation is literal and incorrect (“Firefox covers your shoulders”). The images below demonstrate the use of the “Rephrase” option:

Screenshot of the LLM feature dropdown in Pontoon, before selecting a command.

Screenshot of the LLM feature in Pontoon after selecting the rephrase command, showing the enhanced translation output.
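
Conceptually, the rephrase step amounts to wrapping the machine-translation output in an instruction to a chat model. The sketch below is purely illustrative and assumes an OpenAI-style chat completions endpoint, a stand-in model name and an OPENAI_API_KEY environment variable; Pontoon’s real integration is server-side and may work quite differently.

```ts
// Illustrative only: refining a literal MT suggestion with an LLM.
// Endpoint, model and prompt are stand-ins, not Pontoon's implementation.
async function refineTranslation(
  source: string,
  mtSuggestion: string,
  locale: string,
  style: "formal" | "informal" | "rephrase"
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assumed environment variable holding the API key.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // any chat-capable model works for the sketch
      messages: [
        {
          role: "user",
          content:
            `The ${locale} machine translation of "${source}" is ` +
            `"${mtSuggestion}". Rewrite it (${style}) so it reads naturally ` +
            `and keeps the original meaning. Reply with the translation only.`,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}
```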

Furthering our community engagement, on April 29th, we hosted a Localization Fireside Chat. During this session, we discussed the new feature in depth and provided a live demonstration. Catch the highlights of our discussion at the following recordings (the LLM feature is discussed at the 7:22 mark):

Performance improvements

At the end of last year we asked Mozilla localizers what areas of Pontoon they would like to see improved. Performance optimizations were one of the top-voted requests, and we’re happy to report we’ve landed several speedups since the beginning of the year.

The most notable improvements were made to the dashboards, with the Contributors, Insights and Tags pages now loading in a fraction of the time they took earlier in the year. We’ve also improved the loading times of the Permissions tab, the Notifications page and some filters.

As shown in the chart below, almost all the pages and actions will now take less time to load.

Chart showing the improved Apdex score of several views in Pontoon.

Events

Watch our latest localization virtual events here.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Mozilla Thunderbird BlogThunderbird Monthly Development Digest: April 2024

Graphic with text "Thunderbird Development Digest April 2024," featuring abstract ASCII art on a dark Thunderbird logo background.

Hello Thunderbird Community, and welcome back to the monthly Thunderbird development digest. April just ended and we’re running at full speed into May. We’re only a couple of months away from the next ESR, so things are landing faster and we’re seeing the finalization of a lot of parallel efforts.

20-Year-Old bugs

Something that has been requested for almost 20 years finally landed on Daily. The ability to control the display of recipients in the message list, and to better distinguish unknown addresses from those saved in the Address Book, was implemented in Bug 243258 – Show email address in message list.

This is one of many examples of features that in the past were very complicated and tricky to implement, but that we were finally able to address thanks to improvements in our architecture and a more flexible and modular code base.

We’re aiming to go through those very old requests and slowly address them when possible.

Exchange alpha

More Exchange support improvements and features are landing on Daily almost…daily (pun intended). If you want to test things with a local build, you can follow this overview from Ikey.

We will soon look at the possibility of enabling Rust builds by default, making sure that all users will be able to consume our Rust code from the next beta, with only a pref switch needed to test Exchange.

Folder compaction

If you’ve been tracking our most recent struggles, you’re probably aware of one lingering annoyance: user profiles ballooning in size due to local folder corruption.

Ben dove into the code and found a spaghetti mess that was hard to untangle. You can read more about his exploration and discoveries in his recent post on TB-Planning.

We’re aiming to land this code before the end of the week, and we’ll then call for testing and feedback from the community to ensure that the various issues have been addressed correctly.

You can follow the progress in Bug 1890448 – Rewrite folder compaction.

Cards View

If you’re running Beta or Daily, you might have noticed some very fancy new UI for the Cards View. This is the culmination of many weeks of UX analysis to ensure flexible and consistent hover, selection and focus states.

Micah and Sol identified a total of 27 different interaction states on that list, and implementing visual consistency while guaranteeing optimal accessibility across all operating systems and potential custom themes was not easy.

We’re very curious to hear your feedback.

Context menu

A more refined and updated context menu for the message list also landed on Daily.

A very detailed UX exploration and overview of the implementation was shared on the UX Mailing list a while ago.

This update is only the first step of many more to come, so we apologize in advance if some things are not super polished or seem temporarily off.

ESR Preview

If you’re curious about what the next ESR will look like, or want to check out new features, please consider downloading and installing Beta (preferably in another directory, so it doesn’t override your current profile). Help us test this upcoming release and find bugs early.

As usual, if you want to see things as they land, you can always check the pushlog and try running Daily, which would be immensely helpful for catching bugs early.

See ya next month.

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: April 2024 appeared first on The Thunderbird Blog.

SUMO BlogWhat’s up with SUMO — Q1 2024

Hi everybody,

It’s always exciting to start a new year, as it brings renewed spirit. It’s even more exciting because the CX team welcomed a few additional members this quarter, including Konstantina, who will be with us crafting better community experiences in SUMO. This is huge, since the SUMO community team has been under-resourced for the past few years. I’m personally super excited about this. There are a few things we’re working on internally, and I can’t wait to share them with you all. But first things first: let’s recap what happened and what we did in Q1 2024!

Welcome note and shout-outs

  • Thanks for joining the Social and Mobile Store Support program!
  • Welcome back to Erik L and Noah. It’s good to see you more often these days.
  • Shout-outs to Noah and Sören for their observations during the 125 release, which let us take prompt action on bug1892521 and bug1892612. Also, special thanks to Paul W for his direct involvement in the war room for the NordVPN incident.
  • Thanks to Philipp for his consistency in creating the desktop thread in the contributor forum for every release. Your help is greatly appreciated!
  • Also, huge thanks to everybody who was involved in the Night Mode removal issue on Firefox for iOS 124. In the end, we decided to end the experiment early, since many people raised concerns about accessibility issues. This really shows the power of community and user feedback.

If you know someone who you’d like to feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Community news

  • As I mentioned, we started the year by onboarding Mandy, Donna and Britney. As if that weren’t enough, we also welcomed Konstantina, who moved from Marketing to the CX team in March. If you haven’t gotten to know them yet, please don’t hesitate to say hi when you can.
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
  • We participated in FOSDEM 2024 in Brussels, and it was a blast! It was great to be able to meet face to face with many community members after a long hiatus since the pandemic. Kiki and the platform team also presented a talk in the Mozilla devroom. We also shared free cookies (not the tracking kind) and talked with many Firefox fans from around the globe. All in all, it was a productive weekend indeed.
  • We added a new capability in our KB to set restricted visibility on specific articles. This is a staff-only feature, but we believe it’s important for everybody to be aware of this. If you haven’t, please check out this thread to get to know more!
  • Please be aware of the Hubs sunset plan in this thread.
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • We ran our usual annual contributor survey in March. Thank you to every one of you who filled out the survey and shared great feedback!
  • We changed how we communicate product release updates: they now go through bi-weekly scrum meetings. Please check out this contributor thread for details.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.

Stay updated

  • Join our discussions in the contributor forum to see what’s happening in the latest release on Desktop and mobile.
  • Watch the monthly community call if you haven’t. Learn more about what’s new in January and March (we canceled February’s)! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you’re too shy to ask questions during the meeting, feel free to add your questions to the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of our Firefox Pod Meeting from AirMozilla to catch up with the latest train release. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.
  • Consider subscribing to Firefox Daily Digest to get daily updates (Mon-Fri) about Firefox from across the internet.
  • Check out the SUMO Engineering Board to see what the platform team is cooking in the engine room. Also, check out this page to see our latest release notes.

Community stats

KB

KB pageviews

Month     Page views  vs. previous month
Jan 2024  6,743,722   +3.20%
Feb 2024  7,052,665   +4.58%
Mar 2024  6,532,175   -7.38%

The KB pageviews number is the total of English (en-US) KB pageviews.

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale   Jan 2024    Feb 2024    Mar 2024    Localization progress (as of Apr 23)
de       2,425,154   2,601,865   2,315,952   92%
fr       1,559,222   1,704,271   1,529,981   81%
zh-CN    1,351,729   1,224,284   1,306,699   100%
es       1,171,981   1,353,200   1,212,666   25%
ja       1,019,806   1,068,034   1,051,625   34%
ru       801,370     886,163     812,882     100%
pt-BR    661,612     748,185     714,554     42%
zh-TW    598,085     623,218     366,320     3%
it       533,071     575,245     529,887     96%
pl       489,532     532,506     454,347     84%

Locale pageviews is the overall pageviews from the given locale (KB and other pages).

Localization progress is the percentage of localized articles out of all KB articles per locale.

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month     Total questions  Answer rate within 72 hrs  Solved rate within 72 hrs  Forum helpfulness
Jan 2024  2999             72.6%                      10.8%                      61.3%
Feb 2024  2766             72.4%                      9.5%                       65.6%
Mar 2024  2516             71.5%                      10.4%                      71.6%

Top 5 forum contributors in the last 90 days:

Social Support

Month     Total replies  Total interactions  Response conversion rate
Jan 2024  33             46                  71.74%
Feb 2024  25             65                  38.46%
Mar 2024  14             87                  16.09%

Top 5 Social Support contributors in the past 3 months: 


Play Store Support

Month     Total replies  Total interactions  Response conversion rate
Jan 2024  76             276                 27.54%
Feb 2024  49             86                  56.98%
Mar 2024  47             80                  58.75%

Top 5 Play Store contributors in the past 3 months:

Stay connected

Open Policy & AdvocacyThe UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it

In today’s digital age, an open and competitive ecosystem with a diverse range of players is essential for building a resilient economy. New products and ideas must have the opportunity to grow to give people meaningful choices. Yet, this reality often falls short due to the dominance of a handful of large companies that create walled gardens by self-preferencing their services over independent competitors – limiting choice and hampering innovation.

The UK’s Digital Markets, Competition, and Consumers Bill (DMCCB) offers a unique opportunity to break down these barriers, paving the way for a more competitive and consumer-centric digital market. On the competition side, the DMCCB offers flexibility in allowing for targeted codes of conduct to regulate the behaviour of dominant players. This agile and future-proof approach makes it unique in the ex-ante interventions being considered around the world to rein in abuse in digital markets. An example of what such a code of conduct might look like in practice is the voluntary commitments given by Google to the CMA in the Privacy Sandbox case.

Mozilla, in line with our long history of supporting pro-competition regulatory interventions, supports the DMCCB and its underlying goal of fostering competition by empowering consumers. However, to truly deliver on this promise, the law must be robust, effective, and free from loopholes that could undermine its intent.

Last month, the House of Lords made some much-needed improvements to the DMCCB, which are now slated to be debated in the House of Commons in late April/early May 2024. Here is a high-level overview of the key positive changes and why they should remain part of the law:

  • Time Limits: To ensure the CMA can act swiftly and decisively, its work should be free from undue political influence. This reduces opportunities for undue lobbying and provides clarity for both consumers and companies. While it would be ideal for the CMA to be able to enforce its code of conduct, Mozilla supports the House of Lords’ amendment to introduce a 40-day time limit for the Secretary of State’s approval of CMA guidance. This is a crucial step in avoiding delays and ensuring effective enforcement. The government’s acceptance of this approach and the alternative proposal of 30 working days for debate in the House of Commons is a positive sign, which we hope is reflected in the final law.
  • Proportionality: The Bill’s approach to proportionality is vital. Introducing prohibitive proportionality requirements on remedies could weaken the CMA’s ability to make meaningful interventions, undermining the Bill’s effectiveness. Mozilla endorses the current draft of the Bill from the House of Lords, which strikes a balance by allowing for effective remedies without excessive constraints.
  • Countervailing Benefits: Similarly, the countervailing benefits exemption to CMA’s remedies, while powerful, should not be used as a loophole to justify anti-competitive practices. Mozilla urges that this exemption be reserved for cases of genuine consumer benefit by restoring the government’s original requirement that such exemptions are “indispensable”, ensuring that it does not become a ‘get out of jail free’ card for dominant players.

Mozilla remains committed to supporting the DMCCB’s swift passage through Parliament and ensuring that it delivers on its promise to empower consumers and promote innovation. We launched a petition earlier today to help push the law over the finish line. By addressing the key concerns we’ve highlighted above and maintaining a robust framework, the UK can set a global standard for digital markets and create an environment where consumers are truly in charge.

The post The UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyWork Gets Underway on a New Federal Privacy Proposal

At Mozilla, safeguarding privacy has been core to our mission for decades — we believe that individuals’ security and privacy on the Internet are fundamental and must not be treated as optional. We have long advocated for a federal privacy law to ensure consumers have control over their data and that companies are accountable for their privacy practices.

Earlier this month, House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science and Transportation Chair Maria Cantwell (D-WA) unveiled a discussion draft of the American Privacy Rights Act of 2024 (APRA). The Act is a welcome bipartisan effort to create a unified privacy standard across the United States, with the promise of finally protecting the privacy of all Americans.

At Mozilla, we are committed to the principle of data minimization – a concept that’s fundamental in effective privacy legislation – and we are pleased to see it at the core of APRA. Data minimization means we conscientiously collect only the necessary data, ensure its protection, and provide clear and concise explanations about what data we collect and why. We are also happy to see additional strong language from the American Data Privacy and Protection Act (ADPPA) reflected in this new draft, including non-discrimination provisions and a universal opt-out mechanism (though we support clarification that ensures allowance of multiple mechanisms).

However, the APRA discussion draft has open questions that must be refined. These include how APRA handles protections for children, options for strengthening data brokers provisions even further (such as a centralized mechanism for opt-out rights), and key definitions that require clarity around advertising. We look forward to engaging with policymakers as the process advances.

Achieving meaningful reform in the U.S. is long overdue. In an era where digital privacy concerns are on the rise, it’s essential to establish clear and enforceable privacy rights for all Americans. Mozilla stands ready to contribute to the dialogue on APRA and collaborate toward achieving comprehensive privacy reform. Together, we can prioritize the interests of individuals and cultivate trust in the digital ecosystem.


The post Work Gets Underway on a New Federal Privacy Proposal appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyNet Neutrality is Back!

Yesterday, the Federal Communications Commission (FCC) voted 3-2 to reinstate net neutrality rules and protect consumers online. We applaud this decision to keep the internet open and accessible to all, and to reverse the 2018 roll-back of net neutrality protections. Alongside our many partners and allies, Mozilla has been a long-time proponent of net neutrality across the world and in U.S. states, and has mobilized hundreds of thousands of people over the years.

The new FCC order reclassifies broadband internet as a “telecommunications service” and prevents ISPs from blocking, throttling, or engaging in paid prioritization of traffic. This action restores meaningful and enforceable FCC oversight and protection on the internet, and unlocks innovation, competition, and free expression online.

You can read Mozilla’s submission to the FCC on the proposed Safeguarding and Securing the Open Internet rules in December 2023 here and additional reply comments in January 2024 here.

Net neutrality and openness are essential parts of how we experience the internet, and as illustrated during the COVID pandemic, they can offer important protections – so it shouldn’t come as a surprise that a strong majority of Americans support it. Yesterday’s decision reaffirms that the internet is and should remain a public resource, where companies cannot abuse their market power to the detriment of consumers, and where actors large and small operate on a level playing field.

Earlier this month, Mozilla participated in a roundtable discussion with experts and allies hosted by Chairwoman Rosenworcel at the Santa Clara County Fire Department. The event location highlighted the importance of net neutrality, as the site where Verizon throttled firefighters’ internet speeds in the midst of fighting a raging wildfire. You can watch the full press conference below, and read coverage of the event here.

We thank the FCC for protecting these vital net neutrality safeguards, and we look forward to seeing the details of the final order when released.

The post Net Neutrality is Back! appeared first on Open Policy & Advocacy.

hacks.mozilla.orgLlamafile’s progress, four months in

When Mozilla’s Innovation group first launched the llamafile project late last year, we were thrilled by the immediate positive response from open source AI developers. It’s become one of Mozilla’s top three most-favorited repositories on GitHub, attracting a number of contributors, some excellent PRs, and a growing community on our Discord server.

Through it all, lead developer and project visionary Justine Tunney has remained hard at work on a wide variety of fundamental improvements to the project. Just last night, Justine shipped the v0.8 release of llamafile, which includes not only support for the very latest open models, but also a number of big performance improvements for CPU inference.

As a result of Justine’s work, today llamafile is both the easiest and fastest way to run a wide range of open large language models on your own hardware. See for yourself: with llamafile, you can run Meta’s just-released LLaMA 3 model – which rivals the very best models available in its size class – on an everyday MacBook.

How did we do it? To explain that, let’s take a step back and tell you about everything that’s changed since v0.1.

tinyBLAS: democratizing GPU support for NVIDIA and AMD

llamafile is built atop the now-legendary llama.cpp project. llama.cpp supports GPU-accelerated inference for NVIDIA processors via the cuBLAS linear algebra library, but that requires users to install NVIDIA’s CUDA SDK. We felt uncomfortable with that fact, because it conflicts with our project goal of building a fully open-source and transparent AI stack that anyone can run on commodity hardware. And besides, getting CUDA set up correctly can be a bear on some systems. There had to be a better way.

With the community’s help (here’s looking at you, @ahgamut and @mrdomino!), we created our own solution: it’s called tinyBLAS, and it’s llamafile’s brand-new and highly efficient linear algebra library. tinyBLAS makes NVIDIA acceleration simple and seamless for llamafile users. On Windows, you don’t even need to install CUDA at all; all you need is the display driver you’ve probably already installed.

But tinyBLAS is about more than just NVIDIA: it supports AMD GPUs, as well. This is no small feat. While AMD commands a respectable 20% of today’s GPU market, poor software and driver support have historically made them a secondary player in the machine learning space. That’s a shame, given that AMD’s GPUs offer high performance, are price competitive, and are widely available.

One of llamafile’s goals is to democratize access to open source AI technology, and that means getting AMD a seat at the table. That’s exactly what we’ve done: with llamafile’s tinyBLAS, you can now easily make full use of your AMD GPU to accelerate local inference. And, as with CUDA, if you’re a Windows user you don’t even have to install AMD’s ROCm SDK.

All of this means that, for many users, llamafile will automatically use your GPU right out of the box, with little to no effort on your part.

CPU performance gains for faster local AI

Here at Mozilla, we are keenly interested in the promise of “local AI,” in which AI models and applications run directly on end-user hardware instead of in the cloud. Local AI is exciting because it opens up the possibility of more user control over these systems and greater privacy and security for users.

But many consumer devices lack the high-end GPUs that are often required for inference tasks. llama.cpp has been a game-changer in this regard because it makes local inference both possible and usably performant on CPUs instead of just GPUs. 

Justine’s recent work on llamafile has now pushed the state of the art even further. As documented in her detailed blog post on the subject, by writing 84 new matrix multiplication kernels she was able to increase llamafile’s prompt evaluation performance by an astonishing 10x compared to our previous release. This is a substantial and impactful step forward in the quest to make local AI viable on consumer hardware.

This work is also a great example of our commitment to the open source AI community. After completing this work we immediately submitted a PR to upstream these performance improvements to llama.cpp. This was just the latest of a number of enhancements we’ve contributed back to llama.cpp, a practice we plan to continue.

Raspberry Pi performance gains

Speaking of consumer hardware, there are few examples that are both more interesting and more humble than the beloved Raspberry Pi. For a bargain basement price, you get a full-featured computer running Linux with plenty of computing power for typical desktop uses. It’s an impressive package, but historically it hasn’t been considered a viable platform for AI applications.

Not any more. llamafile has now been optimized for the latest model (the Raspberry Pi 5), and the result is that a number of small LLMs–such as Rocket-3B (download), TinyLLaMA-1.5B (download), and Phi-2 (download)–run at usable speeds on one of the least expensive computers available today. We’ve seen prompt evaluation speeds of up to 80 tokens/sec in some cases!

Keeping up with the latest models

The pace of progress in the open model space has been stunningly fast. Over the past few months, hundreds of models have been released or updated via fine-tuning. Along the way, there has been a clear trend of ever-increasing model performance and ever-smaller model sizes.

The llama.cpp project has been doing an excellent job of keeping up with all of these new models, frequently rolling out support for new architectures and model features within days of their release.

For our part we’ve been keeping llamafile closely synced with llama.cpp so that we can support all the same models. Given the complexity of both projects, this has been no small feat, so we’re lucky to have Justine on the case.

Today, you can use the very latest and most capable open models with llamafile thanks to her hard work. For example, we were able to roll out llamafiles for Meta’s newest LLaMA 3 models – 8B-Instruct and 70B-Instruct – within a day of their release. With yesterday’s 0.8 release, llamafile can also run Grok, Mixtral 8x22B, and Command-R.

Creating your own llamafiles

Since the day that llamafile shipped people have wanted to create their own llamafiles. Previously, this required a number of steps, but today you can do it with a single command, e.g.:

llamafile-convert [model.gguf]

In just moments, this will produce a “model.llamafile” file that is ready for immediate use. Our thanks to community member @chan1012 for contributing this helpful improvement.

In a related development, Hugging Face recently added official support for llamafile within their model hub. This means you can now search and filter Hugging Face specifically for llamafiles created and distributed by other people in the open source community.

OpenAI-compatible API server

Since it’s built on top of llama.cpp, llamafile inherits that project’s server component, which provides OpenAI-compatible API endpoints. This enables developers who are building on top of OpenAI to switch to using open models instead. At Mozilla we very much want to support this kind of future: one where open-source AI is a viable alternative to centralized, closed, commercial offerings.

While open models do not yet fully rival the capabilities of closed models, they’re making rapid progress. We believe that making it easier to pivot existing code over to executing against open models will increase demand and further fuel this progress.

Over the past few months, we’ve invested effort in extending these endpoints, both to increase functionality and improve compatibility. Today, llamafile can serve as a drop-in replacement for OpenAI in a wide variety of use cases.

We want to further extend our API server’s capabilities, and we’re eager to hear what developers want and need. What’s holding you back from using open models? What features, capabilities, or tools do you need? Let us know!
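As a rough illustration (our own sketch, not code from the llamafile project), here is what pointing Rust code at a locally running llamafile might look like. The default localhost:8080 address and the /v1/chat/completions path follow llama.cpp’s server conventions, and the reqwest (with the "blocking" and "json" features) and serde_json crates are assumed:

// Sketch: querying a locally running llamafile via its OpenAI-compatible
// endpoint. Assumes the server is listening on its default address.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        // A single-model server largely ignores the model name; this is
        // a placeholder value.
        "model": "local",
        "messages": [
            { "role": "user", "content": "Say hello in one sentence." }
        ]
    });

    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:8080/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;

    // The response mirrors OpenAI's schema.
    println!("{}", response["choices"][0]["message"]["content"]);
    Ok(())
}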

Integrations with other open source AI projects

Finally, it’s been a delight to see llamafile adopted by independent developers and integrated into leading open source AI projects (like Open Interpreter). Kudos in particular to our own Kate Silverstein who landed PRs that add llamafile support to LangChain and LlamaIndex (with AutoGPT coming soon).

If you’re a maintainer or contributor to an open source AI project that you feel would benefit from llamafile integration, let us know how we can help.

Join us!

The llamafile project is just getting started, and it’s also only the first step in a major new initiative on Mozilla’s part to contribute to and participate in the open source AI community. We’ll have more to share about that soon, but for now: I invite you to join us on the llamafile project!

The best place to connect with both the llamafile team at Mozilla and the overall llamafile community is over at our Discord server, which has a dedicated channel just for llamafile. And of course, your enhancement requests, issues, and PRs are always welcome over at our GitHub repo.

I hope you’ll join us. The next few months are going to be even more interesting and unexpected than the last, both for llamafile and for open source AI itself.


The post Llamafile’s progress, four months in appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgPorting a cross-platform GUI application to Rust

Firefox’s crash reporter is hopefully not something that most users experience often. However, it is still a very important component of Firefox, as it is integral in providing insight into the most visible bugs: those which crash the main process. These bugs offer the worst user experience (since the entire application must close), so fixing them is a very high priority. Other types of crashes, such as content (tab) crashes, can be handled by the browser and reported gracefully, sometimes without the user being aware that an issue occurred at all. But when the main browser process comes to a halt, we need another separate application to gather information about the crash and interact with the user.

This post details the approach we have taken to rewrite the crash reporter in Rust. We discuss the reasoning behind this rewrite, what makes the crash reporter a unique application, the architecture we used, and some details of the implementation.

Why Rewrite?

Even though it is important to properly handle main process crashes, the crash reporter hasn’t received significant development in a while (aside from work to ensure that crash reports and telemetry continue to be delivered reliably)! It has long been stuck in a local maximum of “good enough” and “scary to maintain”: it features three individual GUI implementations (one each for Windows, Linux via GTK+, and macOS), glue code abstracting a few things (mostly in C++, plus Objective-C for macOS), a binary blob produced by obsolete Apple development tools, and no test suite. Because of this, there is a backlog of features and improvements which haven’t been acted on.

We’ve recently had a number of successful pushes to decrease crash rates (including both big leaps and many small bug fixes), and the crash reporter has functioned well enough for our needs during this time. However, we’ve reached an inflection point where improving the crash reporter would provide valuable insight to enable us to decrease the crash rate even further. For the reasons previously mentioned, improving the current codebase is difficult and error-prone, so we deemed it appropriate to rewrite the application so we can more easily act on the feature backlog and improve crash reports.

Like many components of Firefox, we decided to use Rust for this rewrite to produce a more reliable and maintainable program. Besides the often-touted memory safety built into Rust, its type system and standard library make reasoning about code, handling errors, and developing cross-platform applications far more robust and comprehensive.

Crash Reporting is an Edge Case

There are a number of features of the crash reporter which make it quite unique, especially compared to other components which have been ported to Rust. For one thing, it is a standalone, individual program; basically no other components of Firefox are used in this way. Firefox itself launches many processes as a means of sandboxing and insulating against crashes, however these processes all talk to one another and have access to the same code base.

The crash reporter has a very unique requirement: it must use as little as possible of the Firefox code base, ideally none! We don’t want it to rely on code which may be buggy and cause the reporter itself to crash. Using a completely independent implementation ensures that when a main process crash does occur, the cause of that crash won’t affect the reporter’s functionality as well.

The crash reporter also necessarily has a GUI. This alone may not separate it from other Firefox components, but we can’t leverage any of the cross-platform rendering goodness that Firefox provides! So we need to implement a cross-platform GUI independent of Firefox as well. You might think we could reach for an existing cross-platform GUI crate, however we have a few reasons not to do so.

  • We want to minimize the use of external code: to improve crash reporter reliability (which is paramount), we want it to be as simple and auditable as possible.
  • Firefox vendors all dependencies in-tree, so we are hesitant to bring in large dependencies (GUI libraries are likely pretty sizable).
  • There are only a few third-party crates that provide a native OS look and feel (or actually use native GUI APIs): it’s desirable for the crash reporter to have a native feel to be familiar to users and take advantage of accessibility features.

So all of this is to say that third-party cross-platform GUI libraries aren’t a favorable option.

These requirements significantly narrow the approach that can be used.

Building a GUI View Abstraction

In order to make the crash reporter more maintainable (and make it easier to add new features in the future), we want the platform-specific code to be as minimal and generic as possible. We can achieve this by using a simple UI model that can be converted into native GUI code for each platform. Each UI implementation will need to provide two methods (over arbitrary platform-specific &self data):

/// Run a UI loop, displaying all windows of the application until it terminates.
fn run_loop(&self, app: model::Application)

/// Invoke a function asynchronously on the UI loop thread.
fn invoke(&self, f: model::InvokeFn)

The run_loop function is pretty self-explanatory: the UI implementation takes an Application model (which we’ll discuss shortly) and runs the application, blocking until the application is complete. Conveniently, our target platforms generally have similar assumptions around threading: the UI runs in a single thread and typically runs an event loop which blocks on new events until an event signaling the end of the application is received.

There are some cases where we’ll need to run a function on the UI thread asynchronously (like displaying a window, updating a text field, etc). Since run_loop blocks, we need the invoke method to define how to do this. This threading model will make it easy to use the platform GUI frameworks: everything calling native functions will occur on a single thread (the main thread in fact) for the duration of the program.

This is a good time to be a bit more specific about exactly what each UI implementation will look like. We’ll discuss pain points for each later on. There are 4 UI implementations:

  • A Windows implementation using the Win32 API.
  • A macOS implementation using Cocoa (AppKit and Foundation frameworks).
  • A Linux implementation using GTK+ 3 (the “+” has since been dropped in GTK 4, so henceforth I’ll refer to it as “GTK”). Linux doesn’t provide its own GUI primitives, and we already ship GTK with Firefox on Linux to make a modern-feeling GUI, so we can use it for the crash reporter, too. Note that some platforms that aren’t directly supported by Mozilla (like BSDs) use the GTK implementation as well.
  • A testing implementation which will allow tests to hook into a virtual UI and poke things (to simulate interactions and read state).

One last detail before we dive in: the crash reporter (at least right now) has a pretty simple GUI. Because of this, an explicit non-goal of the development was to create a separate Rust GUI crate. We wanted to create just enough of an abstraction to cover the cases we needed in the crash reporter. If we need more controls in the future, we can add them to the abstraction, but we avoided spending extra cycles to fill out every GUI use case.

Likewise, we tried to avoid unnecessary development by allowing some tolerance for hacks and built-in edge cases. For example, our model defines a Button as an element which contains an arbitrary element, but actually supporting that with Win32 or AppKit would have required a lot of custom code, so we special case on a Button containing a Label (which is all we need right now, and an easy primitive available to us). I’m happy to say there aren’t really many special cases like that at all, but we are comfortable with the few that were needed.

The UI Model

Our model is a declarative structuring of concepts mostly present in GTK. Since GTK is a mature library with proven high-level UI concepts, this made it appropriate for our abstraction and made the GTK implementation pretty simple. For instance, the simplest way that GTK does layout (using container GUI elements and per-element margins/alignments) is good enough for our GUI, so we use similar definitions in the model. Notably, this “simple” layout definition is actually somewhat high-level and complicates the macOS and Windows implementations a bit (but this tradeoff is worth the ease of creating UI models).

The top-level type of our UI model is Application. This is pretty simple: we define an Application as a set of top-level Windows (though our application only has one) and whether the current locale is right-to-left. We inspect Firefox resources to use the same locale that Firefox would, so we don’t rely on the native GUI’s locale settings.

As you might expect, each Window contains a single root element. The rest of the model is made up of a handful of typical container and primitive GUI elements:

A class diagram showing the inheritance structure. An Application contains one or more Windows. A Window contains one Element. An Element is subclassed to Checkbox, Label, Progress, TextBox, Button, Scroll, HBox, and VBox types.

The crash reporter only needs 8 types of GUI elements! And really, Progress is used as a spinner rather than indicating any real progress as of right now, so it’s not strictly necessary (but nice to show).

Rust does not explicitly support the object-oriented concept of inheritance, so you might be wondering how each GUI element “extends” Element. The relationship represented in the picture is somewhat abstract; the implemented Element looks like:

pub struct Element {
    pub style: ElementStyle,
    pub element_type: ElementType
}

where ElementStyle contains all the common properties of elements (alignment, size, margin, visibility, and enabled state), and ElementType is an enum containing each of the specific GUI elements as variants.
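Concretely, based on the diagram above, ElementType is presumably shaped something like the following sketch (the variants come from the model diagram; the payload fields are illustrative stand-ins, not the actual definitions):

pub enum ElementType {
    // Special-cased in practice to contain a Label (see above).
    Button { child: Box<Element>, on_click: Option<Box<dyn Fn()>> },
    Checkbox { checked: bool, label: String },
    Label { text: String },
    // Currently used as an indeterminate spinner.
    Progress {},
    Scroll { child: Box<Element> },
    TextBox { content: String },
    HBox { children: Vec<Element> },
    VBox { children: Vec<Element> },
}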

Building the Model

The model elements are all intended to be consumed by the UI implementations; as such, almost all of the fields have public visibility. However, as a means of having a separate interface for building elements, we define an ElementBuilder<T> type. This type has methods that maintain assertions and provide convenience setters. For instance, many methods accept parameters that are impl Into<MemberType>, some methods like margin() set multiple values (but you can be more specific with margin_top()), etc.

There is a general impl<T> ElementBuilder<T> which provides setters for the various ElementStyle properties, and then each specific element type can also provide their own impl ElementBuilder<SpecificElement> with additional properties unique to the element type.
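A minimal sketch of that builder shape follows; only ElementBuilder, ElementStyle, and the margin setters are named in the text, so the field layouts and everything else here are illustrative assumptions:

// Illustrative stand-ins for the real model types.
pub struct Margin { pub top: u32, pub right: u32, pub bottom: u32, pub left: u32 }
pub struct ElementStyle { pub margin: Margin /* alignment, size, visibility, ... */ }
pub struct Label { pub text: String }

pub struct ElementBuilder<T> {
    style: ElementStyle,
    element: T,
}

// Setters shared by every element type.
impl<T> ElementBuilder<T> {
    // Convenience setter: applies one value to all four sides.
    pub fn margin(&mut self, value: impl Into<u32>) -> &mut Self {
        let v = value.into();
        self.style.margin = Margin { top: v, right: v, bottom: v, left: v };
        self
    }

    // A more specific setter for a single side.
    pub fn margin_top(&mut self, value: impl Into<u32>) -> &mut Self {
        self.style.margin.top = value.into();
        self
    }
}

// Setters unique to a specific element type.
impl ElementBuilder<Label> {
    pub fn text(&mut self, text: impl Into<String>) -> &mut Self {
        self.element.text = text.into();
        self
    }
}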

We combine ElementBuilder<T> with the final piece of the puzzle: a ui! macro. This macro allows us to write our UI in a declarative manner. For example, it allows us to write:

let details_window = ui! {
    Window title("Crash Details") visible(show_details) modal(true) hsize(600) vsize(400)
        halign(Alignment::Fill) valign(Alignment::Fill)
    {
        VBox margin(10) spacing(10) halign(Alignment::Fill) valign(Alignment::Fill) {
            Scroll halign(Alignment::Fill) valign(Alignment::Fill) {
                TextBox content(details) halign(Alignment::Fill) valign(Alignment::Fill)
            },
            Button halign(Alignment::End) on_click(move || *show_details.borrow_mut() = false)
            {
                Label text("Ok")
            }
        }
    }
};

The implementation of ui! is fairly simple. The first identifier provides the element type and an ElementBuilder<T> is created. After that, the remaining method-call-like syntax forms are called on the builder (which is mutable).

Optionally, a final set of curly braces indicates that the element has children. In that case, the macro is recursively called to create them, and add_child is called on the builder with the result (so we just need to make sure a builder has an add_child method). Ultimately the syntax transformation is pretty simple, but I believe that this macro is a little bit more than just syntax sugar: it makes reading and editing the UI a fair bit clearer, since the hierarchy of elements is represented in the syntax. Unfortunately, a downside is that there’s no way to support automatic formatting of such macro DSLs, so developers will need to maintain sane formatting themselves.
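For example, the Button fragment above mechanically becomes builder calls along these lines (the default(), build(), and add_child() helpers are assumed names; the real generated code differs in detail):

// Roughly what ui! produces for:
//     Button halign(Alignment::End) on_click(...) { Label text("Ok") }
let mut builder = ElementBuilder::<Button>::default();
builder.halign(Alignment::End);
builder.on_click(move || *show_details.borrow_mut() = false);

let mut label = ElementBuilder::<Label>::default();
label.text("Ok");
builder.add_child(label.build());

let button: Element = builder.build();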

So now we have a model defined and a declarative way of building it. But we haven’t discussed any dynamic runtime behaviors here. In the above example, we see an on_click handler being set on a Button. We also see things like the Window’s visible property being set to a show_details value which is changed when on_click is pressed. We hook into this declarative UI to change or react to events at runtime using a set of simple data binding primitives with which UI implementations can interact.

Many GUI frameworks nowadays (both for Rust and other languages) have been built with the “diffing element trees” architecture (think React), where your code is (at least mostly) functional and side-effect-free and produces the GUI view as a function of the current state. This approach has its tradeoffs: for instance, it makes complicated, stateful alterations of the layout very simple to write, understand, and maintain, and encourages a clean separation of model and view! However since we aren’t writing a framework, and our application is and will remain fairly simple, the benefits of such an architecture were not worth the additional development burden. Our implementation is more similar to the MVVM architecture:

  • the model is, well, the model discussed here;
  • the views are the various UI implementations; and
  • the viewmodel is (loosely, if you squint) the collection of data bindings.

Data Binding

There are a few types which we use to declare dynamic (runtime-changeable) values. In our UI, we needed to support a few different behaviors:

  • triggering events, i.e., what happens when a button is clicked,
  • synchronized values which will mirror and notify of changes to all clones, and
  • on-demand values which can be queried for the current value.

On-demand values are used to get textbox contents rather than using a synchronized value, in an effort to avoid implementing debouncing in each UI. It may not be terribly difficult to do so, but it also wasn’t difficult to support the on-demand implementation.

As a means of convenience, we created a Property type which encompasses the value-oriented fields as well. A Property<T> can be set to either a static value (T), a synchronized value (Synchronized<T>), or an on-demand value (OnDemand<T>). It supports an impl From for each of these, so that builder methods can look like fn my_method(&mut self, value: impl Into<Property<T>>) allowing any supported value to be passed in a UI declaration.
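Putting that description into code, Property<T> is presumably something like this sketch (the From impls mirror the description above; the exact definitions are assumed):

// Sketch; Synchronized<T> and OnDemand<T> are the binding types
// described above.
pub enum Property<T> {
    Static(T),
    Synchronized(Synchronized<T>),
    OnDemand(OnDemand<T>),
}

impl<T> From<T> for Property<T> {
    fn from(value: T) -> Self {
        Property::Static(value)
    }
}

impl<T> From<Synchronized<T>> for Property<T> {
    fn from(value: Synchronized<T>) -> Self {
        Property::Synchronized(value)
    }
}

impl<T> From<OnDemand<T>> for Property<T> {
    fn from(value: OnDemand<T>) -> Self {
        Property::OnDemand(value)
    }
}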

We won’t discuss the implementation in depth (it’s what you’d expect), but it’s worth noting that these are all Clone to easily share the data bindings: they use Rc (we don’t need thread safety) and RefCell as necessary to access callbacks.

In the example from the last section, show_details is a Synchronized<bool> value. When it changes, the UI implementations change the associated window visibility. The Button on_click callback sets the synchronized value to false, hiding the window (note that the details window used in this example is never closed, it is just shown and hidden).

In a former iteration, data binding types had a lifetime parameter which specified the lifetime for which event callbacks were valid. While we were able to make this work, it greatly complicated the code, especially because there’s no way to communicate the correct covariance of the lifetime to the compiler, so there was additional unsafe code transmuting lifetimes (though it was contained as an implementation detail). These lifetimes were also infectious, requiring some of the complicated semantics regarding their safety to be propagated into the model types which stored Property fields.

Much of this was to avoid cloning values into the callbacks, but changing these types to all be Clone and store static-lifetime callbacks was worth making the code far more maintainable.

Threading and Thread Safety

The careful reader might remember that we discussed how our threading model involves interacting with the UI implementations only on the main thread. This includes updating the data bindings, since the UI implementations might have registered callbacks on them! While we could run everything in the main thread, it’s generally a much better experience to do as much off of the UI thread as possible, even if we don’t do much that’s blocking (though we will be blocking when we send crash reports). We want our business logic to default to being off of the main thread so that the UI doesn’t ever freeze. We can guarantee this with some careful design.

The simplest way to guarantee this behavior is to put all of the business logic in one (non-Clone, non-Sync) type (let’s call it Logic) and construct the UI and UI state (like Property values) in another type (let’s call it UI). We can then move the Logic value into a separate thread to guarantee that UI can’t interact with Logic directly, and vice versa. Of course we do need to communicate sometimes! But we want to ensure that this communication will always be delegated to the thread which owns the values (rather than the values directly interacting with each other).

We can accomplish this by creating an enqueuing function for each type and storing that in the opposite type. Such a function will be passed boxed functions to run on the owning thread that get a reference to the owned type (e.g., Box<dyn FnOnce(&T) + Send + 'static>). This is simple to create: for the UI thread, it is just the UI implementation’s invoke method which we briefly discussed previously. The Logic thread does nothing but run a loop which will get these functions and run them on the owned value (we just enqueue and pass them using an mpsc::channel). Now each type can asynchronously call methods on the other with the guarantee that they’ll be run on the correct thread.
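Here is a minimal sketch of the Logic side of that scheme (Logic is a stand-in for the real type; the UI side uses the implementation’s invoke method instead of a channel):

use std::sync::mpsc;
use std::thread;

// Stand-in for the real business-logic type (not Clone, not Sync).
struct Logic;

type LogicFn = Box<dyn FnOnce(&Logic) + Send + 'static>;

// Moves `logic` onto its own thread and returns a sender the UI side
// stores to enqueue work for it.
fn spawn_logic_thread(logic: Logic) -> mpsc::Sender<LogicFn> {
    let (tx, rx) = mpsc::channel::<LogicFn>();
    thread::spawn(move || {
        // Run each queued function on the owned value; the loop ends
        // once every sender has been dropped.
        for f in rx {
            f(&logic);
        }
    });
    tx
}

// On the UI side, delegating to logic then looks like:
//     to_logic.send(Box::new(|logic| { /* use logic here */ })).unwrap();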

In a former iteration, a more complicated scheme was used with thread-local storage and a central type which was responsible for both creating threads and delegating the functions. But with such a basic use case as two threads delegating between each other, we were able to distill this to the essential aspects needed, greatly simplifying the code.

Localization

One nice benefit of this rewrite is that we could bring the localization of the crash reporter up to speed with our modern tooling. In almost every other part of Firefox, we use fluent to handle localization. Using fluent in the crash reporter makes the experience of localizers more uniform and predictable; they do not need to understand more than one localization system (the crash reporter was one of the last holdouts of the old system). It was very easy to use in the new code, with just a bit of extra code to extract the localization files from the Firefox installation when the crash reporter is run. In the worst case scenario where we can’t find or access these files, we have the en-US definitions directly bundled in the crash reporter binary.

The UI Implementations

We won’t go into much detail about the implementations, but it’s worth talking about each a bit.

Linux (GTK)

The GTK implementation is probably the most straightforward and succinct. We use bindgen to generate Rust bindings to the GTK functions we need (avoiding vendoring any external crates). Then we simply call all of the corresponding GTK functions to set up the GTK widgets as described in the model (remember, the model was made to mirror some of the GTK concepts).

Since GTK is somewhat modern and meant to be written by humans (not automated tools like some of the other platforms), there weren’t really any pain points or unusual behaviors that needed to be addressed.

We have a handful of nice features to improve memory safety and correctness. A set of traits makes it easy to attach owned data to GObjects (ensuring data remains valid and is properly dropped when the GObject is destroyed), and a few macros set up the glue code between GTK signals and our data binding types.

Windows (Win32)

The Windows implementation may have been the most difficult to write, since Win32 GUIs are very rarely written nowadays and the API shows its age. We use the windows-sys crate to access bindings to the API (which was already vendored in the codebase for many other Windows API uses). This crate is generated directly from Windows function metadata (by Microsoft), but otherwise its bindings aren’t terribly different from what bindgen might have produced (though they are likely a bit more accurate).

There were a number of hurdles to overcome. For one thing, the Win32 API doesn’t provide any layout primitives, so the high-level layout concepts we use (which allow graceful resizing/repositioning) had to be implemented manually. There are also quite a few extra API calls needed just to get a GUI that looks somewhat decent (correct window colors, font smoothing, high-DPI handling, etc.). Even the default font ends up being a terrible-looking bitmapped font rather than the more modern system default; we needed to manually retrieve the system default and set it as the font to use, which was a bit surprising!
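As an illustration of the font issue, retrieving and applying the modern system default via the windows-sys crate might look roughly like this (our own sketch, not the crash reporter’s actual code):

// Sketch: fetch the system's message font and apply it to a window,
// instead of accepting the legacy bitmapped default.
use windows_sys::Win32::Foundation::HWND;
use windows_sys::Win32::Graphics::Gdi::CreateFontIndirectW;
use windows_sys::Win32::UI::WindowsAndMessaging::{
    SendMessageW, SystemParametersInfoW, NONCLIENTMETRICSW,
    SPI_GETNONCLIENTMETRICS, WM_SETFONT,
};

unsafe fn apply_system_default_font(hwnd: HWND) {
    let mut metrics: NONCLIENTMETRICSW = std::mem::zeroed();
    metrics.cbSize = std::mem::size_of::<NONCLIENTMETRICSW>() as u32;
    let ok = SystemParametersInfoW(
        SPI_GETNONCLIENTMETRICS,
        metrics.cbSize,
        &mut metrics as *mut _ as *mut _,
        0,
    );
    if ok != 0 {
        // lfMessageFont is the font dialogs are expected to use.
        let font = CreateFontIndirectW(&metrics.lfMessageFont);
        SendMessageW(hwnd, WM_SETFONT, font as usize, 1);
    }
}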

We have a set of traits to facilitate creating custom window classes and managing associated window data of class instances. We also have wrapper types to properly manage the lifetimes of handles and perform type conversions (mainly String to null-terminated wide strings and back) as an extra layer of safety around the API.

macOS (Cocoa/AppKit)

The macOS implementation had its tricky parts, as macOS GUIs are overwhelmingly written with Xcode and involve a lot of automated and generated portions (such as nibs). We again use bindgen to generate Rust bindings, this time for the Objective-C APIs in macOS framework headers.

Unlike Windows and GTK, you don’t get keyboard shortcuts like Cmd-C, Cmd-Q, etc., for free when creating a GUI without, e.g., Xcode (which generates them for you as part of a new project template). To have the typical shortcuts that users expect, we needed to manually implement the application main menu (which is what governs keyboard shortcuts). We also had to handle runtime setup like creating Objective-C autorelease pools, bringing the window and application (which are separate concepts) to the foreground, etc. Even implementing invoke to call a function on the main thread had its nuances, since modal windows use a nested event loop which would not call queued functions under the default NSRunLoop mode.

We wrote some simple helper types and a macro to make it easy to implement, register, and create Objective-C classes from Rust code. We used this for creating delegate classes as well as subclassing some controls for the implementation (like NSButton); it made it easy to safely manage the memory of Rust values underlying the classes and correctly register class method selectors.

The Test UI

We’ll discuss testing in the next section. Our testing UI is very simple. It doesn’t create a GUI, but allows us to interact directly with the model. The ui! macro supports an extra piece of syntax, enabled only in tests, to optionally set a string identifier for each element. We use these strings in unit tests to access and interact with the UI. The data binding types also support a few additional methods in tests to easily manipulate values. This UI allows us to simulate button presses, field entry, etc., to ensure that other UI state changes as expected, as well as simulating the system side effects.

Mocking and Testing

An important goal of our rewrite was to add tests to the crash reporter; our old code was sorely lacking them (in part because unit testing GUIs is notoriously difficult).

Mocking Everything

In the new code, we can mock the crash reporter regardless of whether we are running tests or not (though it is always mocked for tests). This is important because mocking allows us to (manually) run the GUI in various states to check that the GUI implementations are correct and render well. Our mocking not only mocks the inputs to the crash reporter (environment variables, command line parameters, etc), it also mocks all side-effectful std functions.

We accomplish this by having a std module in the crate, and using crate::std throughout the rest of the code. When mocking is disabled, crate::std is simply the same as ::std. But when it is enabled, a bunch of functions that we have written are used instead. These mock the filesystem, environment, launching external commands, and other side effects. Importantly, only the minimal amount to mock the existing functions is implemented, so that if e.g. some new functions from std::fs, std::net, etc. are used, the crate will fail to compile with mocking enabled (so that we don’t miss any side effects). This might sound like a lot of effort, but you might be surprised at how little of std really needed to be mocked, and most implementations were pretty straightforward.

Now that we have our code using different mocked functions, we need to have a way of injecting the desired mock data (both in tests and in our normal mocked operation). For example, we have the ability to return some data when a File is read, but we need to be able to set that data differently for tests. Without going into too much detail, we accomplish this using a thread-local store of mock data. This way, we don’t need to change any code to accommodate the mock data; we only need to make changes where we set and retrieve it. The programming language enthusiasts out there may recognize this as a form of dynamic scoping. The implementation allows our mock data to be set with code like

mock::builder()
    .set(
        crate::std::env::MockCurrentExe,
        "work_dir/crashreporter".into(),
    )
    .run(|| crash_reporter_main())

in tests, and

pub fn current_exe() -> std::io::Result<std::path::PathBuf> {
    Ok(MockCurrentExe.get(|r| r.clone()))
}

in our crate::std::env implementation.
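For readers curious how that dynamic scoping can work, here is a generic thread-local store sketch (our own simplification; the real implementation uses typed per-key storage rather than a TypeId map):

use std::any::{Any, TypeId};
use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    // One slot of mock data per key type, scoped to the current thread.
    static MOCK_DATA: RefCell<HashMap<TypeId, Box<dyn Any>>> =
        RefCell::new(HashMap::new());
}

// Associate `value` with the key type `K` on this thread.
fn set_mock<K: 'static, V: 'static>(_key: K, value: V) {
    MOCK_DATA.with(|m| {
        m.borrow_mut().insert(TypeId::of::<K>(), Box::new(value));
    });
}

// Retrieve the value previously stored for the key type `K`, if any.
fn get_mock<K: 'static, V: Clone + 'static>() -> Option<V> {
    MOCK_DATA.with(|m| {
        m.borrow()
            .get(&TypeId::of::<K>())
            .and_then(|v| v.downcast_ref::<V>().cloned())
    })
}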

Testing

With our mocking setup and test UI, we are able to extensively test the behavior of the crash reporter. The “last mile” of this testing which we can’t automate easily is whether each UI implementation faithfully represents the UI model. We manually test this with a mocked GUI for each platform.

Besides that, we are able to automatically test how arbitrary UI interactions cause the crash reporter to affect its own UI state and the environment (checking which programs are invoked and network connections are made, what happens if they fail, succeed, or timeout, etc). We also set up a mock filesystem and add assertions in various scenarios over the precise resulting filesystem state once the crash reporter completes. This greatly increases our confidence in the current behaviors and ensures that future changes will not alter them, which is of the utmost importance for such an essential component of our crash reporting pipeline.

The End Product

Of course we can’t get away with writing all of this without a picture of the crash reporter! This is what it looks like on Linux using GTK. The other GUI implementations look the same but styled with a native look and feel.

The crash reporter dialog on Linux.

Note that, for now, we wanted to keep it looking exactly the same as it previously did. So if you are unfortunate enough to see it, it shouldn’t appear as if anything has changed!

With a new, cleaned up crash reporter, we can finally unblock a number of feature requests and bug reports, such as:

We are excited to iterate and improve further on crash reporter functionality. But ultimately it’d be wonderful if you never see or use it, and we are constantly working toward that goal!

The post Porting a cross-platform GUI application to Rust appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird BlogAdventures In Rust: Bringing Exchange Support To Thunderbird

Microsoft Exchange is a popular choice of email service for corporations and educational institutions, and so it’s no surprise that there’s demand among Thunderbird users to support Exchange. Until recently, this functionality was only available through an add-on. But in the next ESR (Extended Support Release) of Thunderbird in July 2024, we expect to provide this support natively within Thunderbird. Because of the size of this undertaking, the first roll-out of Exchange support will initially cover only email, with calendar and address book support coming at a later date.

This article will go into technical detail on how we are implementing support for the Microsoft Exchange Web Services mail protocol, and some idea of where we’re going next with the knowledge gained from this adventure.

Historical context

Thunderbird is a long-lived project, which means there’s lots of old code. The current architecture for supporting mail protocols predates Thunderbird itself, having been developed more than 20 years ago as part of Netscape Communicator. There was also no paid maintainership from about 2012 — when Mozilla divested and transferred ownership of Thunderbird to its community — until 2017, when Thunderbird rejoined the Mozilla Foundation. That means years of ad hoc changes without a larger architectural vision and a lot of decaying C++ code that was not using modern standards.

Furthermore, in the entire 20 year lifetime of the Thunderbird project, no one has added support for a new mail protocol before. As such, no one has updated the architecture as mail protocols change and adapt to modern usage patterns, and a great deal of institutional knowledge has been lost. Implementing this much-needed feature is the first organization-led effort to actually understand and address limitations of Thunderbird’s architecture in an incremental fashion.

Why we chose Rust

Thunderbird is a large project maintained by a small team, so choosing a language for new work cannot be taken lightly. We need powerful tools to develop complex features relatively quickly, but we absolutely must balance this with long-term maintainability. Selecting Rust as the language for our new protocol support brings some important benefits:

  1. Memory safety. Thunderbird takes input from anyone who sends an email, so we need to be diligent about keeping security bugs out.
  2. Performance. Rust runs as native code with all of the associated performance benefits.
  3. Modularity and Ecosystem. The built-in modularity of Rust gives us access to a large ecosystem where there are already a lot of people doing things related to email which we can benefit from.

The above are all on the standard list of benefits when discussing Rust. However, there are some additional considerations for Thunderbird:

  1. Firefox. Thunderbird is built on top of Firefox code and we use a shared CI infrastructure with Firefox which already enables Rust. Additionally, Firefox provides a language interop layer called XPCOM (Cross-Platform Component Object Model), which has Rust support and allows us to call between Rust, C++, and JavaScript.
  2. Powerful tools. Rust gives us a large toolbox for building APIs which are difficult to misuse by pushing logical errors into the domain of the compiler. We can easily avoid circular references or provide functions which simply cannot be called with values which don’t make sense, letting us have a high degree of confidence in features with a large scope. Rust also provides first-class tooling for documentation, which is critically important on a small team.
  3. Addressing architectural technical debt. Introducing a new language gives us a chance to reconsider some aging architectures while benefiting from a growing language community.
  4. Platform support and portability. Rust supports a broad set of host platforms. By building modular crates, we can reuse our work in other projects, such as Thunderbird for Android/K-9 Mail.

Some mishaps along the way

Of course, the endeavor to introduce our first Rust component in Thunderbird is not without its challenges, mostly related to the size of the Thunderbird codebase. For example, there is a lot of existing code with idiosyncratic asynchronous patterns that don’t integrate nicely with idiomatic Rust. There are also lots of features and capabilities in the Firefox and Thunderbird codebase that don’t have any existing Rust bindings.

The first roadblock: the build system

Our first hurdle came with getting any Rust code to run in Thunderbird at all. There are two things you need to know to understand why:

First, since the Firefox code is a dependency of Thunderbird, you might expect that we pull in their code as a subtree of our own, or some similar mechanism. However, for historical reasons, it’s the other way around: building Thunderbird requires fetching Firefox’s code, fetching Thunderbird’s code as a subtree of Firefox’s, and using a build configuration file to point into that subtree.

Second, because Firefox’s entrypoint is written in C++ and Rust calls happen via an interoperability layer, there is no single point of entry for Rust. In order to create a tree-wide dependency graph for Cargo and avoid duplicate builds or version/feature conflicts, Firefox introduced a hack to generate a single Cargo workspace which aggregates all the individual crates in the tree.

In isolation, neither of these is a problem in itself. However, in order to build Rust into Thunderbird, we needed to define our own Cargo workspace which lives in our tree, and Cargo does not allow nesting workspaces. To solve this issue, we had to define our own workspace and add configuration to the upstream build tool, mach, to build from this workspace instead of Firefox’s. We then use a newly-added mach subcommand to sync our dependencies and lockfile with upstream and to vendor the resulting superset.

XPCOM

While the availability of language interop through XPCOM is important for integrating our frontend and backend, the developer experience has presented some challenges. Because XPCOM was originally designed with C++ in mind, implementing or consuming an XPCOM interface requires a lot of boilerplate and prevents us from taking full advantage of tools like rust-analyzer. Over time, Firefox has significantly reduced its reliance on XPCOM, making a clunky Rust+XPCOM experience a relatively minor consideration. However, as part of the previously-discussed maintenance gap, Thunderbird never undertook a similar project, and supporting a new mail protocol requires implementing hundreds of functions defined in XPCOM.

Existing protocol implementations ease this burden by inheriting C++ classes which provide the basis for most of the shared behavior. Since we can’t do this directly, we are instead implementing our protocol-specific logic in Rust and communicating with a bridge class in C++ which combines our Rust implementations (an internal crate called ews_xpcom) with the existing code for shared behavior, with as small an interface between the two as we can manage.

Please visit our documentation to learn more about how to create Rust components in Thunderbird.

Implementing Exchange support with Rust

Despite the technical hiccups along the way, we cleared the hurdles and can now build and run Rust within Thunderbird. Now we can talk about how we’re using it and the tools we’re building. Remember all the way back to the beginning of this blog post, where we stated that our goal is to support Microsoft’s Exchange Web Services (EWS) API. EWS communicates over HTTP with request and response bodies in XML.

Sending HTTP requests

Firefox already includes a full-featured HTTP stack via its necko networking component. However, necko is written in C++ and exposed over XPCOM, which as previously stated does not make for nice, idiomatic Rust. Simply sending a GET request requires a great deal of boilerplate, including nasty-looking unsafe blocks where we call into XPCOM. (XPCOM manages the lifetime of pointers and their referents, ensuring memory safety, but the Rust compiler doesn’t know this.) Additionally, the interfaces we need are callback-based. To make HTTP requests simple for developers, we needed to do two things:

  1. Support native Rust async/await syntax. For this, we added a new Thunderbird-internal crate, xpcom_async. This is a low-level crate which translates asynchronous operations in XPCOM into Rust’s native async syntax by defining callbacks that buffer incoming data and expose it through Rust’s Future trait so that consumers can await it. (If you’re not familiar with the Future concept in Rust, it is similar to a JS Promise or a Python coroutine.) A minimal sketch of this callback-to-Future technique appears after this list.
  2. Provide an idiomatic HTTP API. Now that we had native async/await support, we created another internal crate (moz_http) which provides an HTTP client inspired by reqwest. This crate handles creating all of the necessary XPCOM objects and providing Rustic error handling (much nicer than the standard XPCOM error handling).
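
To make the first of those concrete, here is a minimal, self-contained sketch of the general callback-to-Future technique: a listener buffers incoming data as it arrives and wakes the awaiting task once the request completes. The listener method names echo XPCOM’s streaming callbacks, but everything below is illustrative; it is not xpcom_async’s actual implementation.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared state between the XPCOM-style listener (producer) and the
// Future (consumer).
struct State {
    buf: Vec<u8>,
    done: bool,
    waker: Option<Waker>,
}

#[derive(Clone)]
struct ResponseFuture(Arc<Mutex<State>>);

impl ResponseFuture {
    fn new() -> Self {
        ResponseFuture(Arc::new(Mutex::new(State {
            buf: Vec::new(),
            done: false,
            waker: None,
        })))
    }

    // Called by the listener each time a chunk of the response arrives
    // (the moral equivalent of nsIStreamListener::OnDataAvailable).
    fn on_data_available(&self, chunk: &[u8]) {
        self.0.lock().unwrap().buf.extend_from_slice(chunk);
    }

    // Called by the listener once the request completes; wakes whichever
    // task is awaiting the buffered body.
    fn on_stop_request(&self) {
        let mut state = self.0.lock().unwrap();
        state.done = true;
        if let Some(waker) = state.waker.take() {
            waker.wake();
        }
    }
}

impl Future for ResponseFuture {
    type Output = Vec<u8>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let mut state = self.0.lock().unwrap();
        if state.done {
            Poll::Ready(std::mem::take(&mut state.buf))
        } else {
            // Not finished yet: store the waker so on_stop_request can
            // re-schedule this task later.
            state.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}
```

This is what makes it possible for consumer code to simply write let body = response.await; inside an async function: the listener, driven from the XPCOM side, completes the shared state, and the executor re-polls the awaiting task.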

Handling XML requests and responses

The hardest task in working with EWS is translating between our code’s own data structures and the XML expected/provided by EWS. Existing crates for serializing/deserializing XML didn’t meet our needs. serde’s data model doesn’t align well with XML, making distinguishing XML attributes and elements difficult. EWS is also sensitive to XML namespaces, which are completely foreign to serde. Various serde-inspired crates designed for XML exist, but these require explicit annotation of how to serialize every field. EWS defines hundreds of types which can have dozens of fields, making that amount of boilerplate untenable.

Ultimately, we found that existing serde-based implementations worked fine for deserializing XML into Rust, but we were unable to find a satisfactory tool for serialization. To that end, we introduced another new crate, xml_struct. This crate defines traits governing serialization behavior and uses Rust’s procedural derive macros to automatically generate implementations of these traits for Rust data structures. It is built on top of the existing quick_xml crate and designed to create a low-boilerplate, intuitive mapping between XML and Rust. While it is in the early stages of development, it does not make use of any Thunderbird/Firefox internals and is available on GitHub.
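
As a rough sketch of what this looks like in practice, here are some hypothetical EWS-flavored types. The derive name follows the XmlSerialize trait described in the project’s repository, but the types, fields, and generated XML shown here are illustrative assumptions rather than the crate’s documented behavior.

```rust
use xml_struct::XmlSerialize;

// Hypothetical request types: the derive generates element-based
// serialization from the struct shape itself, so no per-field
// annotations are needed.
#[derive(XmlSerialize)]
struct GetFolder {
    folder_shape: FolderShape,
    folder_ids: Vec<FolderId>,
}

#[derive(XmlSerialize)]
struct FolderShape {
    base_shape: String,
}

#[derive(XmlSerialize)]
struct FolderId {
    id: String,
}

// Serializing a GetFolder value is intended to produce XML shaped
// roughly like this (namespaces elided):
//
//   <GetFolder>
//     <FolderShape><BaseShape>IdOnly</BaseShape></FolderShape>
//     <FolderIds><FolderId>...</FolderId></FolderIds>
//   </GetFolder>
```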

We have also introduced one more new crate, ews, which defines types for working with EWS and an API for XML serialization/deserialization, based on xml_struct and serde. Like xml_struct, it is in the early stages of development, but is available on GitHub.

Overall flow chart

Below you can find a handy flow chart to help you understand how an Exchange request is made and its response handled.

Fig 1. A bird’s eye view of the flow

What’s next?

Testing all the things

Before landing our next major features, we are taking some time to build out our automated tests. In addition to unit tests, we just landed a mock EWS server for integration testing. The current focus on testing is already paying dividends, having exposed a couple of crashes and some double-sync issues which have since been rectified. Going forward, new features can now be easily tested and verified.

Improving error handling

While we are working on testing, we are also busy improving the story around error handling. EWS’s error behavior is often poorly documented, and errors can occur at multiple levels (e.g., a request may fail as a whole due to throttling or incorrect structure, or parts of a request may succeed while other parts fail due to incorrect IDs). Some errors we can handle at the protocol level, while others may require user intervention or may be intractable. In taking the time now to improve error handling, we can provide a more polished implementation and set ourselves up for easier long-term maintenance.
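
To sketch what those levels might look like when modeled in code, here is one possible partitioning. These are hypothetical types for exposition, not Thunderbird’s actual error handling:

```rust
use std::time::Duration;

// Illustrative only: EWS failures happen at several distinct levels,
// and each level calls for a different response.
enum EwsError {
    // The HTTP exchange itself failed (network down, TLS error, ...);
    // usually worth retrying or surfacing as a connectivity problem.
    Transport(std::io::Error),
    // The server rejected the request as a whole, e.g. due to
    // throttling or a malformed body; retry_after is a server-provided
    // hint, when present.
    Request {
        code: String,
        retry_after: Option<Duration>,
    },
    // Parts of a batched request failed (e.g. a stale item ID) while
    // the rest succeeded; the caller decides per item whether to retry,
    // resync, or ask the user to intervene.
    PerItem(Vec<(usize, String)>),
}
```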

Expanding support

We are working on expanding protocol support for EWS (via ews and the internal ews_xpcom crate) and hooking it into the Thunderbird UI. Earlier this month, we landed a series of patches which allow adding an EWS account to Thunderbird, syncing the account’s folder hierarchy from the remote server, and displaying those folders in the UI. (At present, this alpha-state functionality is gated behind a build flag and a preference.) Next up, we’ll work on fetching message lists from the remote server as well as generalizing outgoing mail support in Thunderbird.

Documentation

Of course, all of our work on maintainability is for naught if no one understands what the code does. To that end, we’re producing documentation on how all of the bits we have talked about here come together, as well as describing the existing architecture of mail protocols in Thunderbird and thoughts on future improvements, so that once the work of supporting EWS is done, we can continue building and improving on the Thunderbird you know and love.

Questions from you
EWS is deprecated and scheduled for removal in 2026. Are there plans to add support for Microsoft Graph to Thunderbird?

This is a common enough question that we probably should have addressed it in the post! EWS will no longer be available for Exchange Online in October 2026, but our research in the lead-up to this project showed that a significant number of users are still using on-premise installs of Exchange Server. That is, many companies and educational institutions are running Exchange Server on their own hardware.

These on-premise installs largely support EWS, but they cannot support the Azure-based Graph API. We expect that this will continue to be the case for some time to come, and EWS provides a means of supporting those users for the foreseeable future. Additionally, we found a few outstanding issues with the Graph API (which is built with web-based services in mind, not desktop applications), and adding EWS support allows us to take some extra time to find solutions to those problems before building Graph API support.

Diving into the past has enabled a sound, engineering-led strategy for dealing with the future: thanks to the deep dive into the existing Thunderbird architecture, we can begin to leverage more efficient and productive patterns and technologies when implementing protocols.

In time, this will have far-reaching consequences for the Thunderbird codebase, which will not only run faster and more reliably, but also carry a significantly reduced maintenance burden when landing bug fixes and new features.

Rust and EWS are elements of a larger effort in Thunderbird to reduce turnaround times and build resilience into the very core of the software.

The post Adventures In Rust: Bringing Exchange Support To Thunderbird appeared first on The Thunderbird Blog.

Firefox UXOn Purpose: Collectively Defining Our Team’s Mission Statement

How the Firefox User Research team crafted our mission statement

Firefox illustration by UX designer Gabrielle Lussier

Like many people who work at Mozilla, I’m inspired by the organization’s mission: to ensure the Internet is a global public resource, open and accessible to all. In thinking about the team I belong to, though, what’s our piece of this bigger puzzle?

The Firefox User Research team tackled this question early last year. We gathered in person for a week of team-focused activities; defining a team mission statement was on the agenda. As someone who enjoys workshop creation and strategic planning, I was on point to develop the workshop. The end goal? A team-backed statement that communicated our unique purpose and value.

Mission statement development was new territory for me. I read up on approaches for creating them and landed on a workshop design (adapted from MITRE’s Innovation Toolkit) that would enable the team to participate in a process of collectively reflecting on our work and defining our shared purpose.

To my delight, the workshop was fruitful and engaging. Not only did it lead us to a statement that resonates, it sparked meaningful discussion along the way.

Here, I outline the five workshop activities that guided us there.

1) Discuss the value of a good mission statement

We kicked off the workshop by discussing the value of a well-crafted statement. Why were we aiming to define one in the first place? Benefits include: fostering alignment between the team’s activities and objectives, communicating the team’s purpose, and helping the team to cohere around a shared direction. In contrast to a vision statement, which describes future conditions in aspirational language, a mission statement describes present conditions in concrete terms.

In our case, the team had recently grown in size to thirteen people. We had a fairly new leadership team, along with a few new members of the team. With a mix of longer tenure and newer members, and quantitative and mixed methods researchers (which at one point in the past had been on separate teams), we wanted to inspire team alignment around our shared goals and build bridges between team members.

2) Individually answer a set of questions about our team’s work

Large sheets of paper were set up around the room with the following questions:

A. What do we, as a user research team, do?

B. How do we do what we do?

C. What value do we bring?

D. Who benefits from our work?

E. Why does our team exist?

Markers in hand, team members dispersed around the room, spending a few minutes writing answers to each question until we had cycled through them all.

Team members during the workshop

3) Highlight keywords and work in groups to create draft statements

Small groups were formed and were tasked with highlighting keywords from the answers provided in the previous step. These keywords served as the foundation for drafting statements, with the following format provided as a helpful guide:

Our mission is to (A — what we do) by (B — how we do it).

We (C — the value we bring) so that (D — who benefits from our work) can (E — why we exist).

One group’s draft statement from Step 3

4) Review and discuss resulting statements

Draft statements emerged remarkably fluidly from the activities in Steps 2 and 3. Common elements were easy to identify (we develop insights and shape product decisions), while the differences sparked worthwhile discussions. For example: How well does the term ‘human-centered’ capture the work of our quantitative researchers? Is creating empathy for our users a core part of our purpose? How does our value extend beyond impacting product decisions?

As a group, we reviewed and discussed the statements, crossing out any jargony terms and underlining favoured actions and words. After this step, we knew we were close to a final statement. We concluded the workshop with a plan to revisit the statements when we were back to work the following week.

5) Refine and share for feedback

The following week, we refined our work and shared the outcome with the lead of our Content Design practice for review. Her sharp feedback included encouraging us to change the phrase ‘informing strategic decisions’ to ‘influencing strategic decisions’ to articulate our role as less passive — a change we were glad to make. After another round of editing, we arrived at our final mission statement:

Our mission is to influence strategic decisions through systematic, qualitative, and quantitative research. We develop insights that uncover opportunities for Mozilla to build an open and healthy internet for all.

Closing thoughts

If you’re considering involving your team in defining a team mission statement, it makes for a rewarding workshop activity. The five steps presented in this article allow team members to reflect on important foundational questions (what value do we bring?), while deepening mutual understanding.

Crafting a team mission statement was much less of an exercise in wordsmithing than I might have assumed. Instead, it was an exercise in aligning on the bigger questions of why we exist and who benefits from our work. I walked away with a better understanding of the value our team brings to Mozilla, a clearer way to articulate how our work ladders up to the organization’s mission, and a deeper appreciation for the individual perspectives of our team members.


On Purpose: Collectively Defining Our Team’s Mission Statement was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

SUMO BlogFreshening up the Knowledge Base for spring 2024

Hello, SUMO community!

This spring we’re happy to announce that we’re refreshing the Mozilla Firefox Desktop and Mobile knowledge bases. This is a project that we’ve been working on for the past several months and now, we’re ready to finally share it with you all! We’ve put together a video to walk you through what these changes mean for SUMO and how they’ll impact you.

Introduction of Article Categories

When exploring our knowledge base, we realized there are so many articles that it’s important to set expectations for users. We’ll be introducing four article types:

  • About – Article that aims to be educational and informs the reader about a certain feature.
  • How To – Article that aims to teach a user how to interact with a feature or complete a task.
  • Troubleshooting – Article that aims to provide solutions to an issue a user might encounter.
  • FAQ – Article that focuses on answering frequently asked questions that a user might have.

We will standardize titles and how articles are formatted per category, so users know what to expect when interacting with an article.

Downsizing and consolidation of articles

There are hundreds upon hundreds of articles in our knowledge base. However, many of them are repetitive and contain similar information. We want to reduce the number of articles and improve the quality of our content. We will be archiving articles and revising active articles throughout this refresh.

Style guideline update focused on reducing cognitive load

As mentioned in a previous post, we will be updating the style guideline, aiming to reduce the cognitive load on users with new guidelines like in-line images. These aren’t huge changes, but we’ll go over them in more detail when we release the updated style guidelines.

With all this coming up, we hope you join us for today’s community call to learn more about the knowledge base refresh. We hope to collaborate with our community to make this update successful.

Have questions or feedback? Drop us a message in this SUMO forum thread.

The Mozilla Thunderbird BlogApril 2024 Community Office Hours: Rust and Exchange Support

Text "COMMUNITY OFFICE HOURS APRIL 2024: RUST AND EXCHANGE" with a stylized Thunderbird bird icon in shades of blue and a custom community icon Iin the center on a lavender background with abstract circular design elements.

We admit it. Thunderbird is getting a bit Rusty, but in a good way! In our monthly Development Digests, we’ve been updating the community about enabling Rust in Thunderbird to implement native support for Exchange. Now, we’d like to invite you for a chat with Team Thunderbird and the developers making this change possible. As always, send your questions in advance to officehours@thunderbird.net! This is a great way to get answers even if you can’t join live.

Be sure to note the change in day of the week and the UTC time. (At least the time changes are done for now!) We had to shift our calendar a bit to fit everyone’s schedules and time zones!

UPDATE: Watch the entire conversation here.

April Office Hours: Rust and Exchange

This month’s topic is a new and exciting change to the core functionality: using Rust to natively support Microsoft Exchange. Join us and talk with the three key Thunderbird developers responsible for this shiny (rusty) new addition: Sean Burke, Ikey Doherty, and Brendan Abolivier! You’ll find out why we chose Rust, the challenges we encountered, and how we used Rust to interface with XPCOM and Necko to provide Exchange support. We’ll also give you a peek into some future plans around Rust.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours where we answered some of your frequently asked recent questions. You can watch clips of specific questions and answers on our TILvids channel. If you’d prefer a written summary, this blog post has you covered.

Join The Video Chat

We’ve also got a shiny new Big Blue Button room, thanks to KDE! We encourage everyone to check out their Get Involved page. We’re grateful for their support and to have an open source web conferencing solution for our community office hours.

Date and Time: Tuesday, April 23 at 16:00 UTC

Direct URL to Join: https://meet.thunderbird.net/b/hea-uex-usn-rb1

Access Code: 964573

The post April 2024 Community Office Hours: Rust and Exchange Support appeared first on The Thunderbird Blog.

The Mozilla Thunderbird BlogTeam Thunderbird Answers Your Most Frequently Asked Questions

We know the Thunderbird community has LOTS of questions! We get them on Mozilla Support, Mastodon, and X.com (formerly Twitter). They pop up everywhere, from the Thunderbird subreddit to the teeming halls of conferences like FOSDEM and SCaLE. During our March Community Office Hours, we took your most frequently asked questions to Team Thunderbird and got some answers. If you couldn’t watch the full session, or would rather have the answers in abbreviated text clips, this post is for you!

Thunderbird for Android / K-9 Mail

The upcoming release on Android is definitely on everyone’s mind! We received lots of questions about this at our conference booths, so let’s answer them!

Will there be Exchange support for Thunderbird for Android?

Yes! Implementing Exchange in Rust in the Thunderbird Desktop client will enable us to reuse those Rust crates as shared libraries with the Mobile client. Stay up to date on Exchange support progress via our monthly Developer Digests.

Will Thunderbird Add-ons be available on Android?

Right now, no, they will not be available. K-9 Mail uses a different code base than Thunderbird Desktop. Thunderbird add-ons are designed for a desktop experience, not a mobile one. We want to have add-ons in the future, but this will likely not happen within the next two years.

When Thunderbird for Android launches, will it be available on F-Droid?

It absolutely will.

When Thunderbird for Android is ready to be released, what will the upgrade path look like?

We know some in the K-9 Mail community love their adorable robot dog and don’t want to give him up yet. So we will support K-9 Mail (same code, different brand) in parallel for a year or two, until the product is more mature, and we see that more K-9 Mail users are organically switching.

Because of Android security, users will need to manually migrate from K-9 Mail to Thunderbird for Android, versus an automatic migration. We want to make that effortless and unobtrusive, and the Sync feature using Mozilla accounts will be a large part of that. We are exploring one-tap migration tools that will prompt you to switch easily and keep all your data and settings – and your peace of mind.

Will CalDAV and CardDAV be available on Thunderbird for Android?

Probably! We’re still determining this, but we know our users like having their contacts and calendars inside one app for convenience, as well as out of privacy concerns. While it would be a lot of engineering effort, we understand the reasoning behind these requests. As we consider how to go forward, we’ll release all these explorations and ideas in our monthly updates, where people can give us feedback.

Will the K-9 Mail API provide the ability to download the saved preferences that Sync stores locally, to plug into automation like Ansible?

Yes! Sync is open source, so users can self-host their own instance instead of using Mozilla services. This question touches on the differences in data structures between desktop and mobile, and how each handles settings. So this will take a while, but once we have something stable in a beta release, we’ll have articles on how to hook up your own sync server and do your own automation.


Thunderbird for Desktop

When will we have native Exchange support for desktop Thunderbird?

We hope to land this in the next ESR (Extended Support Release), version 128, in limited capacity. Users will still need to use the OWL Add-on for all situations where standard Exchange Web Services is not available. We don’t yet know if native calendar and address book support will be included in the ESR. We want to support every aspect of Exchange, but there is a lot of code complexity and a history of changes from Microsoft. So our primary goal is good, stable support for email by default, and calendar and address book if possible, for the next ESR.

When will conversations and a true threaded view be added to Thunderbird?

Viewing your own sent emails is an important component of a true conversation view. This is a top priority and we’re actively working towards it. Unfortunately, this requires overhauling the backend database that underlies Thunderbird, which is 20 years old. Our legacy database is not built to handle conversation views with received and sent messages listed in the same thread. Restructuring a two-decades-old database is not easy. Our goal is to have a new global message database in place by May 31. If nothing has exploded, it should be much easier to enable conversation view in the front end.

When will we get a full sender name column with the raw email address of the sender? This will help further avoid phishing and spam.

We plan to make this available in the next ESR — Thunderbird 128 — which is due July 2024.

Will there ever be a browser-based view of Thunderbird?

Despite our foundations in Firefox, this is a huge effort that would have to be built from scratch. This isn’t on our roadmap and not in our plans for now. If there was a high demand, we might examine how feasible this could be. Alex explains this in more detail during the short video below:

The post Team Thunderbird Answers Your Most Frequently Asked Questions appeared first on The Thunderbird Blog.

hacks.mozilla.orgPrototype even faster with the Gradio UI for Figma component library

The generative AI industry is moving quickly, and that requires teams exploring new ideas and technologies to move quickly as well. To do so, we have been using Gradio, a low-code prototyping toolkit from Hugging Face, to spin up experiments and experiences. Gradio has allowed us to validate concepts through prototyping without large investments of time, effort, or infrastructure.

Although Gradio has made the development phase of prototyping easier, the design phase has remained largely the same. Even with Gradio, designers have had to create components in Figma, outline expected user flows and behaviors, and hand off designs to developers the same way they always have. While working on a recent exploration, we realized something was needed: a set of Figma components based on Gradio that enables designers to create wireframes quickly.

Today, we are releasing our library of design components for Gradio for others to use. The components are based on version 4.23.0 of Gradio and will be available through our Figma profile: Mozilla Innovation Projects, https://www.figma.com/@futureatmozilla. We hope these components help teams accelerate their discovery and experimentation with ML and generative AI.

You can find out more about Gradio at https://www.gradio.app/ and more about innovation at Mozilla at https://future.mozilla.org

Thanks to Amy Chiu and Anais Ron who created the components and to the Gradio team for their work. Happy designing!

What’s Inside Gradio UI for Figma?

Because Gradio is an ever-changing prototyping kit, current components are based on version 4.23.0 of Gradio. We selected components based on their wide array of potential uses. Here is a list of the components inside the kit:

  • Typography (e.g. headers, body fonts)
  • Iconography (e.g. chevrons, arrows, corner expanders) 

Small Components:

  • Buttons
  • Checkbox
  • Radio
  • Sliders
  • Tabs
  • Accordion
  • Delete Button
  • Error Message
  • Media Type Labels
  • Media Player Controller

Big Components:

  • Label + Textbox
  • Accordion with Label + Input
  • Video Player
  • Label + Counter
  • Label + Slider
  • Accordion + Label
  • Checkbox with Label
  • Radio with Label
  • Accordion with Content
  • Accordion with Label + Input
  • Top navigation

How to Access and Use Gradio UI for Figma

To start using the library, follow these simple steps:

  1. Access the Library: Access the component library directly by visiting our public Figma profile (https://www.figma.com/@futureatmozilla) or by searching for “Gradio UI for Figma” within the Figma Community section of your web or desktop Figma application.
  2. Explore the Documentation: Familiarize yourself with the components and guidelines to make the most out of your design process.
  3. Connect with Us: Connect with us by following our Figma profile or emailing us at innovations@mozilla.com

The post Prototype even faster with the Gradio UI for Figma component library appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird BlogThunderbird for Android / K-9 Mail: March 2024 Progress Report

Featured graphic for "Thunderbird for Android March 2024 Progress Report" with stylized Thunderbird logo and K-9 Mail Android icon, resembling an envelope with dog ears.

If you’ve been wondering how the work to turn K-9 Mail into Thunderbird for Android is coming along, you’ve found the right place. This blog post contains a report of our development activities in March 2024. 

We’ve published monthly progress reports for a while now. If you’re interested in what happened previously, check out February’s progress report. The report for the preceding month is usually linked in the first section of a post. But you can also browse the Android section of our blog to find progress reports and release announcements.

Fixing bugs

For K-9 Mail, new stable releases typically include a lot of changes. K-9 Mail 6.800 was no exception. That means a lot of opportunities to accidentally introduce new bugs. And while we test the app in several ways – manual tests, automated tests, and via beta releases – there’s always some bugs that aren’t caught and make it into a stable version. So we typically spend a couple of weeks after a new major release fixing the bugs reported by our users.

K-9 Mail 6.801

Stop capitalizing email addresses

One of the known bugs was that some software keyboards automatically capitalized words when entering the email address in the first account setup screen. A user opened a bug and provided enough information (❤) for us to reproduce the issue and come up with a fix.

Line breaks in single line text inputs

At the end of the beta phase a user noticed that K-9 Mail wasn’t able to connect to their email account even though they copy-pasted the correct password to the app. It turned out that the text in the clipboard ended with a line break. The single line text input we use for the password field didn’t automatically strip that line break and didn’t give any visual indication that there was one.

While we knew about this issue, we decided it wasn’t important enough to delay the release of K-9 Mail 6.800. After the release we took some time to fix the problem.

DNSSEC? Is anyone using that?

When setting up an account, the app attempts to automatically find the server settings for the given email address. One part of this mechanism is looking up the email domain’s MX record. We intended for this lookup to support DNSSEC and specifically looked for a library supporting this.

Thanks to a beta tester we learned that DNSSEC signatures were never checked. The solution turned out to be embarrassingly simple: use the library in a way that it actually validates signatures.

Strange error message on OAuth 2.0 failure

A user in our support forum reported a strange error message (“Cannot serialize abstract class com.fsck.k9.mail.oauth.XOAuth2Response”) when using OAuth 2.0 while adding their email account. Our intention was to display the error message returned by the OAuth server. Instead an internal error occurred. 

We tracked this down to the tool optimizing the app by stripping unused code and resources when building the final APK. The optimizer was removing a bit too much. But once the issue was identified, the fix was simple enough.

Crash when downloading an attachment

Shortly after K-9 Mail 6.800 was made available on Google Play, I checked the list of reported app crashes in the developer console. Not a lot of users had gotten the update yet. So there were only very few reports. One was about a crash that occurred when the progress dialog was displayed while downloading an attachment. 

The crash had been reported before. But the number of crashes never crossed the threshold where we consider a crash important enough to actually look at. 

It turned out that the code contained the bug since it was first added in 2017. It was a race condition that was very timing sensitive. And so it worked fine much more often than it did not. 

The fix was simple enough. So now this bug is history.

Don’t write novels in the subject line

The app was crashing when trying to send a message with a very long subject line (around 1000 characters). This, too, wasn’t a new bug. But the crash occurred rarely enough that we didn’t notice it before.

The bug is fixed now. But it’s still best practice to keep the subject short!

Work on K-9 Mail 6.802

Even though we fixed quite a few bugs in K-9 Mail 6.801, there’s still more work to do. Besides fixing a couple of minor issues, K-9 Mail 6.802 will include the following changes.

F-Droid metadata

In preparation for building two apps (Thunderbird for Android and K-9 Mail), we moved the app description and screenshots that are used for F-Droid’s app listing to a new location inside our source code repository. We later found out that this new location is not supported by F-Droid, leading to an empty app description on the F-Droid website and inside their app.

We switched to a different approach and hope this will fix the app description once K-9 Mail 6.802 is released.

Push not working due to missing permission

Fresh installs of the app on Android 14 no longer automatically get the permission to schedule exact alarms. But this permission is necessary for Push to work. This was a known issue. But since it only affects new installs and users can manually grant this permission via Android settings, we decided not to delay the stable release until we added a user interface to guide the user through the permission flow.

K-9 Mail 6.802 will include a first step to improve the user experience. If Push is enabled but the permission to schedule exact alarms hasn’t been granted, the app will change the ongoing Push notification to ask the user to grant this permission.

In a future update we’ll expand on that and ask the user to grant the permission before allowing them to enable Push.

What about new features?

Of course we haven’t forgotten about our roadmap. As mentioned in February’s progress report we’ve started work on switching the user interface to use Material 3 and adding/improving Android 14 compatibility.

There’s not much to show yet. Some Material 3 changes have been merged already. But the user interface in our development version is currently very much in a transitional phase.

The Android 14 compatibility changes will be tested in beta versions first, and then back-ported to K-9 Mail 6.8xx.

Releases

In March 2024 we published the following stable release:

There hasn’t been a release of a new beta version in March.

The post Thunderbird for Android / K-9 Mail: March 2024 Progress Report appeared first on The Thunderbird Blog.

The Mozilla Thunderbird BlogAutomated Testing: How We Catch Thunderbird Bugs Before You Do

Since the release of Thunderbird 115, a big focus has been on improving the state of our automated testing. Automated testing increases the software quality by minimizing the number of bugs accidentally introduced by changes to the code. For each change made to Thunderbird, our testing machines run a set of tests across Windows, macOS, and Linux to detect mistakes and unintended consequences. For a single change (or a group of changes that land at the same time), 60 to 80 hours of machine time is used running tests.

Our code is going to be under more pressure than ever before – with a bigger team making more changes, and monthly releases reducing the time code spends on testing channels before being released.

We want to find the bugs before our users do.

Why We’re Testing

We’re not writing tests merely to make ourselves feel better. Tests improve Thunderbird by:

  • Preventing mistakes
    If we test that some code behaves in an expected way, we’ll find out immediately if it no longer behaves that way. This means a shorter feedback loop, and we can fix the problem before it annoys the users.
  • Finding out when somebody upstream breaks us
    Thunderbird is built from the Firefox code. The Firefox code, which we are not responsible for, is 30 to 40 times the size of the code we are responsible for. When something inevitably changes in Firefox that affects us, we want to know about it immediately so that we can respond.
  • Freeing up human testers
    If we use computers to prove that the program does what it’s supposed to do, particularly if we avoid tedious repetition and difficult-to-set-up tasks, then the limited human resources we have can do more things that humans are better at.
    For example, I’ve recently added tests that check 22 ways to trigger fetching mail, and 10 circumstances fetching mail might not work. There’s no way our human testers (great though they are) are testing all of them, but our automated tests can and do, several times a day.
  • Thinking through what the code should be doing
    Testing forces an engineer to look at the code from a different point-of-view, and this is helpful to think about what the code is supposed to do in more circumstances. It also makes it easier to prove that the code does work in obscure circumstances.
  • Finding existing bugs
    In software terms we’re working with some very old code, and much of it is untested. Testing it puts a fresh set of eyes on the code and reveals some of the mistakes of the past, and where the ravages of time have broken things. It also helps the person writing the tests to understand what the code does, a lot better than just reading the code does.

We’re not trying to completely cover a feature or every edge case in tests. We are trying to create a testing framework around the feature so that when we find a bug, as well as fixing it, we can easily write a test preventing the bug from happening again without being noticed. For too much of the code, this has been impossible without a weeks-long detour into tests.

Breaking New Ground

In the past few months we’ve figured out how to make automated tests for things that were previously impossible:

  • Communication with mail servers using encrypted channels.
  • OAuth2 authentication with mail servers.
  • Communication with web servers where a specific address must be used and an unencrypted channel must not be used.
  • Servers at any given host name or port. Previously, if we wanted to start a server for automated testing, it had to be on the local machine at a non-standard location. Now we can pretend that the server is anywhere, and using standard ports, which is needed for proper testing of account configuration features. (Actually, this was possible before, but now it’s much easier.)

These new abilities are being used to wrap better testing around account set-up features, ahead of the new Account Hub development, so that we can be sure nothing breaks without being noticed. They’re also helping test that collecting mail works when it should, or gives the error prompts we expect when it doesn’t.

Code coverage

We record every line of code that runs during our tests. Collecting all that data tells us what code doesn’t run during our tests. If a block of code doesn’t run during any of our tests, nothing will tell us when it breaks until somebody uses the code and complains.

Our code coverage data can be viewed at coverage.thunderbird.net. You can also look at Firefox’s data at coverage.moz.tools.

Looking at the data, you might notice that our overall number is now lower than it was when we started measuring. This doesn’t mean that our testing got worse; it shows where we added a lot of code (that isn’t maintained by us) in the third_party directory. For a better reflection of the progress we’ve made, check out the individual directories, especially mail/base, which contains the most important user interface code.

  • Just setting up the code coverage tools and looking at the results uncovered several memory leaks. (A memory leak is where memory is allocated for a task and not released when it is no longer needed.) We fixed these leaks and some more that existed in our test code. We now have very low levels of memory leaking in our test runs, so if we make a mistake it is easy to spot.
  • Code coverage data can also point to code that is no longer used. We’ve removed some big chunks of this dead code, which means we’re not wasting time maintaining it.

Mozmill no more

Towards the end of last year we finally retired an old test suite known as Mozmill. Those tests were partially migrated to a different test suite (Mochitest) about four years ago, and things were mostly working fine so it wasn’t a priority to finish. These tests now do things in a more conventional way instead of relying on a bunch of clever but weird tricks.

How much of the code is test code?

About 27%. This is a very rough estimate based on the files in our code repository (minus some third-party directories) and whether they are inside a directory with “test” in the name or not. That’s risen from about 19% in the last five years.
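
For the curious, an estimate of this sort can be produced with a short script. The sketch below is illustrative rather than the exact method used for the number above; the “test”-in-the-directory-name heuristic and the third_party skip are assumptions. It walks a checkout and tallies lines under directories whose names contain “test” against everything else:

```rust
use std::fs;
use std::path::Path;

// Recursively tally (test_lines, other_lines), treating any file below
// a directory whose name contains "test" as test code.
fn tally(path: &Path, in_test_dir: bool, totals: &mut (u64, u64)) {
    let Ok(entries) = fs::read_dir(path) else { return };
    for entry in entries.flatten() {
        let p = entry.path();
        let name = entry.file_name().to_string_lossy().to_lowercase();
        if p.is_dir() {
            if name == "third_party" {
                continue; // third-party code isn't maintained by us
            }
            tally(&p, in_test_dir || name.contains("test"), totals);
        } else if let Ok(contents) = fs::read_to_string(&p) {
            // read_to_string fails on binary files, which we skip.
            let lines = contents.lines().count() as u64;
            if in_test_dir {
                totals.0 += lines;
            } else {
                totals.1 += lines;
            }
        }
    }
}

fn main() {
    let mut totals = (0u64, 0u64);
    tally(Path::new("."), false, &mut totals);
    let pct = 100.0 * totals.0 as f64 / (totals.0 + totals.1).max(1) as f64;
    println!("test code: {pct:.1}% of {} total lines", totals.0 + totals.1);
}
```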

There is no particular goal in mind, but I can imagine a future where there is as much test code as non-test code. If we achieve that, Thunderbird will be in a very healthy place.

A stacked area chart showing the estimated lines of test code (in red) and non-test code (in blue) over time, from January 2019 to January 2024. The chart indicates both types of code increase over this period.

Looking ahead, we’ll be asking contributors to add tests to their patches more often. This obviously depends on the circumstance. But if you’re adding or fixing something, that is the best time to ensure it continues to work in the future. As always, feel free to reach out if you need help writing or running tests, either via Matrix or Topicbox mailing lists:

Geoff Lankow, Staff Engineer

The post Automated Testing: How We Catch Thunderbird Bugs Before You Do appeared first on The Thunderbird Blog.

SUMO BlogKeeping you in the loop: What’s new in our Knowledge Base?

Hello, SUMO community!

We’re setting the stage for something big: a revamp of our style guide designed to make our support content not just user-friendly, but user-delightful. To get a clearer picture of the SUMO user experience, we enlisted the help of an external agency, embarking on a research project designed to peel back the layers of how users interact with our platform. The results were quite revealing. Many users, it turns out, find themselves overwhelmed by the vast amount of information available, often feeling confused and struggling to pinpoint the exact answers they’re searching for. To address this, we’re rolling out targeted improvements and focused enhancements to our style guides and contributor resources, aiming to refine how we organize, categorize, and present our support content in SUMO for a smoother, more intuitive user journey.

Have questions or feedback? Drop us a message in this SUMO forum thread.

Refreshing the content taxonomy

A key takeaway from the research was the users’ difficulty in navigating our content categories. This prompted us to rethink our approach to organizing support content, aiming for a better alignment with user needs and industry best practices. This project is in full swing, and we’ll be ready to share more details with you shortly.

Auditing the Firefox content

In our effort to align our content with user needs, we’ve initiated a comprehensive audit of all Firefox support articles. This exhaustive review aims to identify areas where we can reclassify content, eliminate outdated information, and consolidate similar topics. Our goal is to ensure that every piece of information in the KB is relevant, easy to understand, and directly beneficial to our users.

We’re gearing up to share how you can contribute to this exciting initiative. Mark your calendars for the SUMO Community Meeting on Wednesday, April 10, 2024, where we’ll unveil more about this project.

Updating the article types

Using consistent content types for our knowledge base articles has many benefits including ease of navigation and improved clarity and organization, in addition to helping us create content more effectively. We are transitioning to categorizing external knowledge base articles into four types, each serving a specific purpose:

  • About: These articles address “What is…” questions, providing essential information to help readers understand a topic.
  • How-to: These articles focus on answering “How to…?” questions, guiding readers through the steps required to achieve a specific goal or procedure.
  • Troubleshooting: These articles assist users in identifying, diagnosing, and resolving common issues they may encounter with a product, service, or feature by addressing “How to…?” questions related to problem-solving.
  • FAQ: These articles contain concise answers to frequently asked questions on a single topic, which may not fit within other individual KB articles, providing a quick reference for common inquiries.

Stay tuned for additional training and documentation on these article types!

Reducing cognitive load

We believe finding information should not be akin to a mental obstacle course. Focused on minimizing cognitive load, we’ve outlined a series of strategies aimed at guiding users directly to the information they need, no fuss involved. Below are the key strategies we’re implementing:

  • Straight to the point with inline images and icons: We’re transitioning from textual guidance to visual demonstrations. By embedding inline targeted UI captures and icons directly into the article flow, we aim to provide a more visual path for users, minimizing the need for mental translation of text into actions. But, hang on – we haven’t forgotten about making these changes work for everyone. For those using screen readers, we’re counting on you to help us ensure every image and icon comes with comprehensive alt text, making every visual accessible through sound. And on the localization front, your skills are more important than ever. We’re calling on you to assist in capturing and adding alt text to localized images, ensuring it’s accessible and resonant for every member of our global community. For details see Effective use of inline images.
  • Cleaner, more focused images with SUI (simplified user interfaces): To make things even clearer, we’re simplifying our product’s UI in screenshots to just the essentials. This not only makes the images easier to follow but also means they’ll stay accurate longer, even if small UI changes happen. For more info, see Simplifications.
  • Streamlined steps with annotated screenshots: For tasks that necessitate two or more clicks or actions on a single screen, we’re shifting to a more intuitive approach: using screenshots marked with numbered annotations. This strategy will clear away the need for multiple, similar screenshots, making instructions easier to follow while minimizing scrolling.

Keep an eye out for the updated style guides – they’re coming soon!

What this means for you

Our updates will be rolling out from Q2 to Q4 2024, and we’re thrilled to have you on board as we bring these changes to life. The kickoff is just around the corner, so stay tuned for updates! Have thoughts to share or looking to contribute? We’re all ears. Engage with us directly on this SUMO forum thread. Your feedback and involvement are crucial as we progress together.

Thank you for making a difference!

Open Policy & AdvocacyMozilla provides feedback to ACM’s DSA Guidelines

The EU’s Digital Services Act (DSA) has taken effect, ushering in a new era of accountability, transparency, and responsibility for digital platforms. Mozilla has actively supported the DSA –  and its aim to build a safer digital ecosystem – since the legislation was first proposed, and continues to contribute to conversations about how to implement it effectively.

Technology companies that offer services in the EU must “designate a sufficiently mandated legal representative in the Union and provide information relating to their legal representatives to the relevant authorities,” and each EU country must appoint a Digital Services Coordinator to interpret and enforce the DSA.

In January of this year, the Authority for Consumers and Markets in the Netherlands (ACM) published draft guidelines for its interpretation and enforcement of the DSA. Mozilla recently provided feedback, focused largely on areas where further detail or clarification would be helpful, as well as on challenges small and mid-sized platforms may face during implementation.

Specifically, Mozilla recommended the following:

  • Clarification of “ancillary services.”  

The ACM’s draft guidelines note that Recital 13 of the DSA exempts “ancillary services” where, as with the comment section of a newspaper’s website, “the possibility of posting comments… is only an incidental characteristic of the main service.”  Mozilla recommends that this “ancillary services” exception also expressly include services for tech support and product feedback, and similar platforms that exist only to support a primary product that is not subject to DSA. Such forums are clearly ancillary to the main products, as their purpose is to help address bugs and other product-specific issues within those products.

  • Refining the definition of “traders.” 

The DSA imposes additional requirements on platforms that host B2C online marketplaces, by requiring that the platforms track and store data about “traders” that operate on their platform.  DSA Recital 23, which presumes that traders in an online marketplace are offering goods or services for a price, highlights that this provision is intended to cover those platforms that facilitate online commerce. Mozilla recommends that the guidelines make this intent clear, by expressly stating that: (i) “traders” do not include those providing free online services, and (ii) platforms which do not incur profits or facilitate the exchange of money are not B2C online marketplaces.

  • Allowing platforms the flexibility to address spam.

The DSA’s obligations do not apply when platforms act to address “deceptive, high-volume commercial content.” For effective implementation of the guidelines, we believe there needs to be more clarification of how such content is defined. The ACM guidance indicates that the exception applies where someone intentionally manipulates a service through the use of bots, fake accounts, or deceptive practices. Mozilla recommends that the guidance be supplemented to ensure that platforms have the ability to address evolving threats: including clarifying that the references to bots and fake accounts are non-exhaustive examples and not intended to further constrain the spam exception, and establishing a plan to periodically update the guidance to address changing circumstances and developing technologies.

  • Clarifying the Statement of Reasons requirement.

Both the DSA itself and the ACM guidance require platforms to provide a statement of reasons whenever they moderate content or restrict a user account, explaining the legal or contractual provision on which their action was based. Mozilla asked that ACM provide additional details on what such statements should contain; this would provide greater clarity and standardization for platforms and ensure that moderation (particularly of illegal content) remains workable at scale.

  • Allowing platforms flexibility on suspensions.

The ACM guidance allows a platform to permanently suspend users for “manifestly illegal content related to serious crimes.” However, it requires that a platform always issue a warning before suspending a user. Mozilla recommends that the ACM expressly confirm platforms have the right to suspend users for violating their Terms of Service, even if their activity is not illegal. Mozilla also recommends that the warning requirement be clarified, and relaxed in cases where having to warn a user might prevent platforms from responding to serious offenses in a timely manner.

As a longtime advocate for the DSA and for platform accountability, Mozilla is enthusiastic about the legislation’s potential to create a safer Internet ecosystem for all. Our comments to ACM, and our ongoing work on this subject, aim to further that goal without overly burdening small and mid-sized platforms. We look forward to working with the ACM and other European regulators in the coming months, as this legislation continues to take shape.

The post Mozilla provides feedback to ACM’s DSA Guidelines appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogThunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait?

Let’s step back into the Thunderbird Time Machine and teleport ourselves back to December 2009. If you were on the bleeding edge, maybe you were upgrading your computer to the newly released Windows 7 (or checking out Ubuntu 9.10 “Karmic Koala”.) Perhaps you were pouring all your free time into Valve’s ridiculously fun team-based survival shooter Left 4 Dead 2. And maybe, just maybe, you were eagerly anticipating installing Thunderbird 3.0 — especially since it had been a lengthy two years since Thunderbird 2.0 had launched.

What happened during those two years? The Thunderbird developer community — and Mozilla Messaging — clearly stayed busy and productive. Thunderbird 3.0 introduced several new feature milestones!

1) The Email Account Wizard

We take it for granted now, but in the 2000s, adding an account to an email client wasn’t remotely simple. Traditionally you needed to know your IMAP/POP3 and SMTP server URLs, port numbers, and authentication settings. When Thunderbird 3.0 launched, all that was required was your username and password for most mainstream email service providers like Yahoo, Hotmail, or Gmail. Thunderbird went out and detected the rest of the settings for you. Neat!
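
That detection still exists today as Thunderbird’s autoconfig database, the ISPDB. Here is a minimal sketch of the lookup idea, assuming the current public endpoint at autoconfig.thunderbird.net; the real wizard also probes the provider’s own autoconfig URL and falls back to guessing common hostnames, which this skips:

import urllib.request
import xml.etree.ElementTree as ET

def autodetect(email_address):
    # Look up provider settings in Mozilla's ISPDB by mail domain.
    domain = email_address.rsplit("@", 1)[-1]
    url = "https://autoconfig.thunderbird.net/v1.1/" + domain
    with urllib.request.urlopen(url, timeout=10) as response:
        root = ET.parse(response).getroot()
    incoming = root.find(".//incomingServer")
    return {
        "type": incoming.get("type"),               # e.g. "imap"
        "hostname": incoming.findtext("hostname"),  # e.g. "imap.googlemail.com"
        "port": incoming.findtext("port"),
        "socketType": incoming.findtext("socketType"),
    }

print(autodetect("someone@gmail.com"))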

2) A New Tabbed Interface

With Firefox at its core, Thunderbird followed in the footsteps of most web browsers by offering a tabbed interface. Imagine! Being able to quickly tab between various searches and emails without navigating a chaotic mess of separate windows!

3) A New Add-on Manager

Screenshot from HowToGeek’s Thunderbird 3.0 review.

Speaking of Firefox, Thunderbird quickly adopted the same kind of Add-on Manager that Firefox had recently integrated. No need to fire up a browser to search for useful extensions to Thunderbird — now you could search and install new functionality from right inside Thunderbird itself.

4) Advanced Search Options

Searching your emails got a massive boost in Thunderbird 3.0. Advanced filtering tools meant you could filter your results by sender, attachments, people, folders, and more. A shiny new timeline view was also introduced, letting you jump directly to a certain date’s results.

5) The Migration Assistant

Tying this all together was a simple but wonderful migration assistant. It served as a way to introduce users to certain new features (like per-account IMAP synchronization), and to visually toggle them on or off (useful for displaying the revised Message Toolbar and giving users a choice of where to enjoy it). To me, this particular addition felt ahead of its time. We’ve been discussing the idea of re-introducing it in a future Thunderbird release, but one of the steep hurdles to doing so now is localization. If it’s something you’d like to see, let us know in the comments.

Try It Out For Yourself

If you want to personally step into the Thunderbird Time Machine, every version ever released for Windows, Linux, and macOS is available in this archive. I ran mine inside a Windows 7 virtual machine, since my native Linux install complained about missing libraries when trying to get Thunderbird 3.0 running.

Whether you’re a new Thunderbird user or a veteran who’s been with us since 2003, thanks for being on the journey with us!

The post Thunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait? appeared first on The Thunderbird Blog.

Web Application SecurityRapidly Leveling up Firefox Security

At Mozilla, we believe in an open web that is safe to use. To that end, we improve and maintain the security of people using Firefox around the world. This includes a solid track record of responding to security bugs in the wild, especially with bug bounty programs such as Pwn2Own. As soon as we discover a critical security issue in Firefox, we plan and ship a rapid fix. This post describes how we recently fixed an exploit discovered at Pwn2Own in less than 21 hours, a success only made possible through the collaborative and well-coordinated efforts of a global cross-functional team of release and QA engineers, security experts, and other stakeholders.

A Bit Of Context

Pwn2Own is an annual computer hacking contest where participants aim to find security vulnerabilities in major software such as browsers. Two weeks ago, this event took place in Vancouver, Canada, where participants investigated everything from Chrome, Firefox, and Safari to MS Word and even the code currently running on your car. Without getting into the technical details of the exploit here, this blog post will describe how Mozilla quickly responds to and ships updated builds for exploits found during Pwn2Own.

To give you a sense of scale, Firefox is a massive piece of software: 30 million+ lines of code, six platforms (Windows 32 & 64bit, GNU/Linux 32 & 64bit, Mac OS X and Android), 90 languages, plus installers, updaters, etc. Releasing such a beast involves coordination across many cross-functional teams spanning the entire globe.

The timing of the Pwn2Own event is known weeks beforehand, so Mozilla is always ready when it rolls around! The Firefox train release calendar takes into consideration the timing of Pwn2Own. We try not to ship a new version of Firefox to end users on the release channel on the same day as Pwn2Own, to hopefully avoid multiple updates close together. This also means that we are prepared to ship a patched version of Firefox as soon as we know what vulnerabilities were discovered, if any at all.

So What Happened?

The specific exploit disclosed at Pwn2Own consisted of two bugs, a necessity when typical web content is rendered inside a proverbial browser sandbox. These two sophisticated exploits took an admirable amount of effort to reveal and leverage. Nevertheless, as soon as the exploit was discovered, Mozilla engineers got to work, shipping a new release within 21 hours! We certainly weren’t the only browser “pwned”, but we were the first to patch our vulnerability. That’s right: before you knew about this exploit, we had already protected you from it.

As scary as this might sound, Sandbox Escapes, like many web browser exploits, are an issue common to all browsers, thanks to the evolving nature of the internet. Firefox developers are always eager to find and resolve these security issues as quickly as possible to ensure our users stay safe. We do this continuously by shipping new mitigations like win32k lockdown, site isolation, investing in security fuzzing, and promoting bug bounties for similar escapes. In the interest of openness and transparency, we also continuously invite and reward security researchers who share their newest attacks, which helps us keep our product safe even when there isn’t a Pwn2Own to participate in.

Related Resources

If you’re interested in learning more about Mozilla’s security initiatives or Firefox security, here are some resources to help you get started:

Mozilla Security
Mozilla Security Blog
Bug Bounty Program
Mozilla Security playlist on YouTube

Furthermore, if you want to kickstart your own security research in Firefox, we invite you to follow our deeply technical blog at Attack & Defense – Firefox Security Internals for Engineers, Researchers, and Bounty Hunters.

Past Pwn2Own Blog: https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

The post Rapidly Leveling up Firefox Security appeared first on Mozilla Security Blog.

The Mozilla Thunderbird BlogThunderSnap! Why We’re Helping Maintain The Thunderbird Snap On Linux

We love our Linux users across all Linux distributions. That is why we’ve stepped up to help maintain the Thunderbird Snap available in the Snap Store.

Last year we took ownership of the Thunderbird Flatpak, and it has been our officially recommended package for Linux users. However, we are expanding our horizons to make sure the Thunderbird Snap experience is officially supported too. We at Thunderbird are team “free software”, independent of the packaging technology. This will mostly affect our Ubuntu users but there are plenty of other Snap users out there as well. 

Why support both the Snap and Flatpak?

In the spirit of free software, we want to support as many of our users as possible without discriminating on their package preferences. We are not a large company with infinite resources, so we can’t support everything under the sun. But we can make informed decisions that reach the majority of our Linux users.

The Thunderbird Snap has been well maintained by the Ubuntu desktop team for years, and we felt it was time to step up and help out.

What does this mean for me?

If you are an Ubuntu user, then you may already be using the Thunderbird Snap. The next release of Ubuntu is 24.04 (available April 25) and will be the first Ubuntu release that seeds the Thunderbird Snap on the ISO. So if you do a fresh full install of Ubuntu, you will be using the Thunderbird Snap that you know is directly supported by the Thunderbird team.

If you are not an Ubuntu user but Snaps are still a part of your life, then you will still benefit from the same rolling updates provided by the Snap experience.

What changes are expected?

From a user perspective, you should see no changes. Just keep using whichever Thunderbird Snap channel you are comfortable with.

From a developer perspective, we have added the Snap build to our build infrastructure on Treeherder. This means whenever a full build is triggered automatically from commits, the Snap is built as well for testing. Whenever the build is one we want to release to the public, this triggers a general flow:

  1. A version bump is pushed to the existing Thunderbird Snap GitHub repository.
  2. The existing Launchpad mirror will pick up this change and automatically build the Snap for x86 and arm64.
  3. If the Launchpad Snap build succeeds, the Snap will be uploaded to the designated Snap Store channel.

So all we are changing is adding the Snap build into the Thunderbird build infrastructure and plugging it into the existing automation that feeds the Snap Store.
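
To make step 1 concrete, here is a hypothetical sketch of what such a version-bump automation could look like; the repository path, file name, and YAML key are illustrative stand-ins, not Thunderbird’s actual tooling:

import pathlib
import re
import subprocess

def bump_snap_version(repo, new_version):
    # Rewrite the version line in snapcraft.yaml and push, so the
    # Launchpad mirror picks up the change and rebuilds the Snap.
    snapcraft = pathlib.Path(repo) / "snapcraft.yaml"
    text = re.sub(r"(?m)^version:.*$", "version: " + new_version,
                  snapcraft.read_text())
    snapcraft.write_text(text)
    subprocess.run(["git", "-C", repo, "commit", "-am",
                    "Bump to " + new_version], check=True)
    subprocess.run(["git", "-C", repo, "push"], check=True)

bump_snap_version("thunderbird-snap", "115.10.0")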

Where do I report a bug on the Thunderbird Snap?

As with all supported package types of Thunderbird, we would like bugs about the Thunderbird Snap to be reported on bugzilla.mozilla.org under the Thunderbird project.

The post ThunderSnap! Why We’re Helping Maintain The Thunderbird Snap On Linux appeared first on The Thunderbird Blog.

Mozilla Add-ons BlogDeveloper Spotlight: Control Panel for Twitter

You can’t predict how or when success will come. In the case of Control Panel for Twitter — a Firefox extension that gives users authority over the amount of algorithmic content they’re fed — it went viral in Japan a few years ago and word spread fast. One devoted fan even jumped into the open-source code and quickly localized the extension in Japanese, further catapulting its appeal. Today, Control Panel for Twitter has more than 250,000 users from all over the world enjoying it across various browsers.

A comprehensive Options page gives you easy, intuitive control over your Twitter/X experience.

“Most of my extensions are for sites I’m a long-time user of, fixing issues which bug me, and adding missing features,” explains developer Jonny Buchannon. One of the first issues he addressed was designing a feature that moved retweets into a separate tab.

“If you don’t like the algorithmic ‘For you’ timeline, it’s usually because it’s full of random tweets about topics you’re not interested in, or worse, deliberate engagement bait. If you look at all the retweets in your timeline, they tend to have a similar problem,” explains Buchannon. “By default, following someone on Twitter lets them put any tweet in your timeline with no effort — a single click or tap — without having to add their own comment, and sometimes they do that because the tweet in question made them feel strong negative emotions; sometimes people will also retweet a string of tweets about similar topics, filling up your timeline.”

To fix this problem, the extension swaps the “For you” timeline for the “Following” (chronological) version. Control Panel for Twitter can also hide other types of Twitter/X content like the “See new Tweets” button, “Who to follow,” “Follow some topics,” all the X Premium upsell prompts, and more.

Even with gobs of current customization features, Buchannon says there’s a “huge backlog” of potential enhancements in their GitHub Issues. New features coming soon include the ability to control what you see in Notifications (like hiding Likes and retweets) and improvements to viewing a conversation under a focused tweet.

App-solutely atrocious experience — try Twitter/X on the mobile web!

Control Panel for Twitter is also available on Firefox for Android (addons.mozilla.org [AMO] recently launched an open ecosystem of extensions on Firefox for Android). While it may seem strange to use a mobile browser to access Twitter/X instead of the app, Buchannon says he primarily added mobile support for his own personal use. “I’m the #1 user on that front,” he says before issuing a “warning” to prospective users of his extension on Firefox for Android: “Once you get used to the changes Control Panel for Twitter makes to the experience, default Twitter is unusable — be it the app or the website.”

There are also mobile-specific features, such as changes it brings to Twitter/X search functionality. In standard Twitter/X, when you tap the Search nav you’re brought to the Explore page, which is loaded with algorithmic content. Control Panel for Twitter can hide that so you’re simply presented with a streamlined search field.

Apparently Buchannon isn’t alone in his preference for the mobile web version of Twitter/X while using his extension. He says Control Panel for Twitter has only been available on the App Store for Safari for a little over a year, but already 78% of its Safari users are using it on the iPhone.

Based on the same philosophy as Control Panel for Twitter, Buchannon just released Control Panel for YouTube.

“One of the main focuses of the initial version was improving the Subscription pages by automatically hiding any content you don’t want to see in there like Shorts, live streams, ‘upcoming’ videos you can’t watch now, and hiding videos you’ve already watched, so it acts more like an inbox, where videos disappear as you watch them.”

Sounds great, can’t wait to try it out. Less is often more with social media.

Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Control Panel for Twitter appeared first on Mozilla Add-ons Community Blog.

The Mozilla Thunderbird BlogThunderbird Monthly Development Digest: March 2024

Hello Thunderbird Community! March is over, which means it’s time for another Development Digest to share the current progress and product direction of Thunderbird development.

Is this your first time reading the Development Digest? Find them all using the Dev Digest tag!

Rust and Exchange

It seems that this section is part of every Development Digest! But that’s the reality of these large efforts, which span multiple months with slow but steady progress.

This month we completed initial Exchange Autodiscovery and compatibility with OAuth in our account setup flow, as well as fetching and rendering of all folders. Some areas still need polish and clean-up, but work continues toward having things behind a pref in the next beta release. You can follow the progress in this bug.

Meanwhile, here are some goodies to try if you need to parse the Microsoft Exchange Web Services data set and the current crates for serializing and deserializing XML don’t serve you well: https://github.com/thunderbird/xml_struct

List management

Shout out to Magnus for implementing the first step towards a more manageable mailing list subscription flow. An initial implementation of the List Management feature just landed on daily and beta, and it was recently announced in the tb-beta mailing list with a screenshot to show it in action.

It’s currently accessible via a context menu on the List ID. But we’re planning to do some UX and UI explorations to find the best way to expose it without making it annoying.

You can follow the work from this bug.

ESMification completed!

Another big shout out to Magnus for finishing the ESMification effort! As users, you won’t see or notice any difference, but for developers this substantial architectural change saw the removal of all .jsm files in favor of standard JavaScript modules. 

A huge win for a more standardized code base! This allows us to leverage all the nice features of modern JavaScript in Thunderbird development. 

Tiny changes and improvements in Thunderbird development

A lot of nice quality of life improvements tend to happen in small chunks that are not easy to see or spot right away.

Here’s a list of the projects we’re actively working on and will be focusing on for the next month:

  • Cards view UI completion.
  • Fixing the missing FindBar in the multimessage and browser views.
  • Implementation of a new visual selection paradigm.
  • Improvements to usability and accessibility of the Quick Filter bar.
  • Completion of the email setup in the new Account Hub.
  • Many add-ons API improvements and additions (big shout out to John).
  • Support for viewing nested signed messages and other OpenPGP improvements.

Stay tuned and make sure to sign up to our mailing lists to get detailed updates on all the items in this list, and a lot more.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

See ya next time in our April Development Digest.

Alessandro Castellani (he, him)
Director of Product Engineering

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: March 2024 appeared first on The Thunderbird Blog.

Open Policy & AdvocacyHow the U.S. Government is leading by example on artificial intelligence

For years, the U.S. government has seen the challenges and opportunities of leveraging AI to advance its mission. Federal agencies have tried to use facial recognition to identify suspects and taxpayers, raising serious concerns about bias and privacy. Some agencies have tried to use AI to identify veterans at higher risk of suicide, where incorrect predictions in either direction can harm veterans’ health and well-being.

On the flip side, federal agencies are already harnessing AI in promising ways — from making it easier to forecast the weather, to predicting failures of air navigation equipment, to simply automating paperwork. If harnessed well, AI promises to improve the many federal services that Americans rely upon every day.

That’s why we’re thrilled that, today, the White House established a strong policy to empower federal agencies to responsibly harness the power of AI for public benefit. The policy carefully identifies riskier uses of AI and sets up strong guardrails to ensure those applications are responsible. And, the policy simultaneously creates leadership and incentives for agencies to fully leverage the potential of AI.

The policy is rooted in a simple observation: not all applications of AI are equally risky or equally beneficial. For example, it’s far less risky to use AI for digitizing paper documents than to use AI for determining who receives asylum. The former doesn’t need more scrutiny beyond existing rules, but the latter introduces risks to human rights and should be held to a much higher bar.

Diagram explaining how this policy mitigates AI risks.

Hence, the policy takes a risk-based approach to prioritize resources for AI accountability. This approach largely ignores AI applications that are low risk or appropriately managed by other policies, and focuses on AI applications that could meaningfully impact people’s safety or rights. For example, to use AI in electrical grids or autonomous vehicles, it needs to have an impact assessment, real-world testing, independent evaluation, ongoing monitoring, and appropriate public notice and human override. And, to use AI to filter resumes and approve loans, it needs to include the aforementioned protections for safety, mitigate against bias, incorporate public input, conduct ongoing monitoring, and provide reasonable opt-outs. These protections are based on common sense: AI that’s integral to domains like critical infrastructure, public safety, and government benefits should be tested, monitored, and include human overrides. The specifics of these protections are aligned with years of rigorous research and incorporate public comment so that the interventions are more likely to be both effective and feasible.

The policy applies a similar approach to AI innovation. It calls for agencies to create AI strategies with a focus on prioritizing top AI use cases, reducing barriers to AI adoption, setting goals around AI maturity, and building the capacity needed to harness AI in the long run. This, paired with actions in the AI Executive Order that surge AI talent to high-priority locations across the federal government, sets agencies up to better deploy AI where it can be most impactful.

These rules are also coupled with oversight and transparency. Agencies are required to appoint senior Chief AI Officers who oversee both the accountability and innovation mandates in the policy, and agencies also have to publish their plans to comply with these rules and stop using AI that doesn’t. In general, federal agencies also have to report their AI applications in annual AI use case inventories, and provide additional information about how they are managing risks from safety- and rights-impacting AI. The Office of Management and Budget (OMB) will oversee compliance, and that office is required to have sufficient visibility into any exemptions sought by agencies to the AI risk mitigation practices outlined in the policy.

These practices are slated to be highly impactful. Federal law enforcement agencies — including immigration and border enforcement — should now have many of their uses of facial recognition and predictive analytics subject to strong risk mitigation practices. Millions of people work for the U.S. Government, and now these federal workers will have the protections outlined in this policy if their employers try to surveil and manage their movements and behaviors via AI. And, when federal agencies try to use AI to identify fraud in programs such as food stamps and financial aid, those agencies will now have to make sure that the AI actually works and doesn’t discriminate.

These rules also apply regardless of whether a federal agency builds the AI themselves or purchases it from a vendor. That will have a large market-shaping impact, as the U.S. government is the largest purchaser of goods and services in the world, and agencies will now be incentivized to only purchase AI services that comply with the policy. The policy further directs agencies to share their AI code, models, and data — promoting open-source approaches that are vital for the AI ecosystem broadly. Additionally, when procuring AI services, the policy recommends that agencies promote market competition and interoperability among AI vendors, and avoid self-preferential treatment and vendor lock-in. This all helps advance good government, making sure taxpayer dollars are spent on safe and effective AI solutions, not on risky and over-hyped snake oil from contractors.

Now, federal agencies will work to comply with this policy in the coming months. They will also develop follow-up guidance to support the implementation of this policy, advance better procurement of AI, and govern the use of AI in national security applications. The hard work is not over; there are still outstanding questions to tackle as part of this future work, such as figuring out how to embed open source requirements more explicitly as part of the AI procurement process, helping to reduce agencies’ dependencies on specific AI vendors.

Amidst a flurry of government activity on AI, it’s worth stepping back and reflecting: today is a big day for AI policy. The U.S. government is leading by example with its own rules for AI, and Mozilla stands ready to help make the implementation of this policy a success.

The post How the U.S. Government is leading by example on artificial intelligence appeared first on Open Policy & Advocacy.

SeaMonkeySeaMonkey 2.53.18.2 is out!

Hi everyone!

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.18.2, which is a security release. Please check out [1] and/or [2].

Please note that the updates are forthcoming.

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18.2

[2] – https://www.seamonkey-project.org/releases/2.53.18.2

Open Policy & AdvocacyPathways to a fairer digital world: shaping EU rules to increase consumer protection and choice online

In the evolving digital landscape, where every click, swipe, and interaction shapes people’s daily lives, the need for robust consumer protection has never been more paramount. The propagation of deceptive design practices, aggressive personalization, and proliferation of fake reviews have the potential to limit or distort choices online and harm people, particularly the most vulnerable, by tricking them into taking actions that are not in their best interest, causing financial loss, loss of privacy, security, and well-being.

At Mozilla, we are committed to building a healthy Internet – an Internet that respects fundamental rights and constitutes a space where individuals can genuinely exercise their choices. Principles 4 and 5 of our Manifesto state that individuals must have the ability to shape the internet and their own experiences on it, while their security and privacy are fundamental and must not be treated as optional. In today’s interconnected world, these are put at stake.

Voluntary commitments by industry are not sufficient, and legislation can play a crucial role in regulating such practices. Recent years have seen the EU act as a pioneer when it comes to online platform regulation. Updating existing EU consumer protection rules and ensuring strong and coherent enforcement of existing legislation will build on this framework to further protect EU citizens in the digital age.

Below, we summarise our recommendations to EU policymakers ahead of the next European Commission mandate 2024-2029 to build a fairer digital world for users and consumers:

  • Addressing harmful design practices – Harmful design practices in digital experiences – such as those that coerce, manipulate, or deceive consumers – are increasingly compromising user autonomy and reducing choice. They not only thrive at the interface level but also lie deeper in the system’s architecture. We advocate for a clear shift towards ethical digital design through stronger regulation, particularly as technology evolves. This would include stronger enforcement of existing regulations addressing harmful design practices (e.g., GDPR, DSA, DMA). At the same time, the EU should update its consumer protection rules to prohibit milder ‘dark patterns’ and introduce an anti-circumvention clause to ensure that no bypassing of legal requirements by design techniques will be possible.
  • Balancing personalization & privacy online –  Personalization in digital services enhances user interaction but poses significant privacy risks and potential biases, leading to the exposure of sensitive information and societal inequalities. To address these issues, our key recommendations include the adoption of rules that will ensure the enforcement of consumer choices given through consent processes. Such rules should also incentivise the use and uptake of privacy-enhancing technologies through legislation (e.g. Consumer Rights Directive) to strike the right balance between personalization practices and respect of privacy online.
  • Tackling fake reviews – The growing problem of fake reviews on online platforms has the potential to mislead consumers and distort product value. We recommend stronger enforcement of existing rules, meaningful transparency measures, including explicit disclosure requirements for incentivized reviews, increased accountability for consumer-facing online platforms, and consistency across the EU and internationally in review handling to ensure the integrity and trustworthiness of online reviews.
  • Rethinking the ‘average consumer’ – The traditional definition of the ‘average consumer’ in EU consumer law is characterised as “reasonably well informed, observant, and circumspect”. The digital age directly challenges this definition as consumers are increasingly more vulnerable online. Due to the ever-growing information asymmetry between traders and consumers, the yardstick of an ‘average consumer’ does not necessarily reflect existing consumer behaviour. For that reason, we ask for the reevaluation of this concept to reflect today’s reality. Such an update will actively lower the existing threshold and thus increase the overall level of protection and prevent the exploitation of vulnerable groups, especially in personalised commercial practices.

To read our detailed position, click here.

The post Pathways to a fairer digital world: shaping EU rules to increase consumer protection and choice online appeared first on Open Policy & Advocacy.

SUMO BlogIntroducing Konstantina

Hi folks,

I’m super excited to share that Konstantina is joining the Customer Experience team to help with the community in SUMO. Some of you may already know Konstantina because she’s been around Mozilla for quite a while. She’s transitioning internally from the Community Programs team under Marketing to the Customer Experience team under Strategy and Operations.

Here’s a bit more about Konstantina in her own words:

Hi everyone, my name is Konstantina and I am very happy I am joining your team! I have been involved with Mozilla since 2011, initially as a volunteer and then as a contractor (since late 2012). During my time here, I have had a lot of roles, from events organizer, community manager to program manager, from working with MDN, Support, Foxfooding, Firefox and many more. I am passionate about communities and how we bring their voices to create great products and I am joining your team to work with Kiki on creating a great community experience. I live in Berlin, Germany with my partner and our cat but I am originally from Athens, Greece. Fun fact about me, I studied geology and I used to do a lot of caving, so I know a lot about ropes and rappelling (though I am a bit rusty now). I also love building legos as you will soon see from my office background. Can’t wait to get to know you all more!

Please join me to welcome Konstantina (back) to SUMO!

The Mozilla Thunderbird BlogMarch 2024 Community Office Hours: Open Forum and FAQ

This month’s topics for our Thunderbird Community Office Hours will be decided by you! We’d like to invite the community to bring their questions, comments, and general conversation to Team Thunderbird for an informal and informational chat. As always, send your questions in advance to officehours@thunderbird.net!

Be sure to note the change in day of the week and time, especially if you’re in Europe and not on summer time yet!

March Office Hours: Open Forum and FAQ

While we love having community office hours with specific topics, from our design process to Add-ons, we want to make time for an open forum, where you bring the topics of discussion. Do you have a great idea for a feature request, or need help filing a bug? Or do you want to know how to use SUMO better, or get some Thunderbird tips? Maybe you want to know more about Team Thunderbird, whether it’s how we got started in open source to how we like our coffee. This is the time to ask these questions and more!

We also just got back from SCaLE21x, and we had so many great questions from people who stopped by the booth. So in addition to answering your questions, whether emailed or live, we’d like to tackle some of the things people asked most during our first SCaLE appearance.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours with John Bieling all about Add-on development. We had a fantastic chat about the history, present state, and future of Add-ons, with advice on getting involved in development and support. Watch the video below and read more about our guest at last month’s blog post.

Join The Video Chat

Date and Time: Wednesday, March 27 at 17:00 UTC

Direct URL to Join: https://mozilla.zoom.us/j/95272980798

Meeting ID: 95272980798

Password: 439169

Dial by your location:

  • +1 646 518 9805 US (New York)
  • +1 669 219 2599 US (San Jose)
  • +1 647 558 0588 Canada
  • +33 1 7095 0103 France
  • +49 69 7104 9922 Germany
  • +44 330 088 5830 United Kingdom
  • Find your local number: https://mozilla.zoom.us/u/adkUNXc0FO

The post March 2024 Community Office Hours: Open Forum and FAQ appeared first on The Thunderbird Blog.

Open Policy & AdvocacyMozilla, Center for Democracy and Technology call for openness and transparency in AI

Update | 27 March 2024: Mozilla has submitted its comments to the NTIA’s consultation on openness in AI models, as originally referenced in this blog post. Drawing on Mozilla’s own history as part of the open source movement, the submission seeks to help guide difficult conversations about openness in AI. First, we shine a light on the different dimensions of openness in AI, including on different components across the AI stack and development lifecycle. Second, we argue that openness in AI can spur competition and help the diffusion of innovation and its benefits more broadly across the economy and society as a whole; that it can advance open science and progress in the entire field of AI; and that it advances accountability and safety by enabling more research and supporting independent scrutiny as well as regulatory oversight. In the past and with a view to recent progress in AI, openness has been a key tenet of U.S. leadership in technology — but ill-conceived policy interventions could jeopardize U.S. leadership in AI. We also recently published the technical and policy readouts from the Columbia Convening on Openness and AI to serve as a resource to the community, both for this consultation and beyond.


Civil society and academics are joining together to defend AI openness and transparency. Mozilla and the Center for Democracy & Technology (CDT), along with members of civil society and academia, have united to underscore the importance of openness and transparency in AI. Nearly 50 signatories sent a letter to Secretary Gina Raimondo in response to the U.S. Commerce Department’s request for comment on openness in AI models.

“We are excited to collaborate with expert individuals and organizations who are committed to seeing more transparent AI innovation,” said Jenn Taylor Hodges, Director of US Public Policy & Government Relations at Mozilla. “Open models in AI will promote trustworthiness and accountability that will better serve society. Mozilla has a long history of promoting open source and fighting corporate consolidation on the Internet. We are bringing those values and experiences to the AI era, making sure that everyone has a say in shaping the future of AI.”

There has been a noticeable shift in the AI landscape toward closed systems, a trend that Mozilla has diligently worked to counter. As detailed in the recently released Accelerating Progress Toward Trustworthy AI report, prominent AI entities are adopting closed systems, prioritizing proprietary control over collaborative openness. These companies have advocated for increased opacity, citing fears of misuse. However, beneath these arguments lies a clear agenda to stifle competition and limit oversight in the AI market.

The joint letter was sent in advance of the Department of Commerce’s comment deadline on AI models, which closes March 27. Endorsed by science policy think tanks, advocates against housing discrimination, and computer science luminaries, it argued:

  • Open models have significant benefits to society: They help advance innovation, competition, research, civil and human rights protections, and safety and security.
  • Policy should look at marginal risks of open models compared to closed models: Commerce should look to recent Stanford and Princeton research, which emphasizes limited evidence that open models create new risks not present in closed models.
  • Policy should focus more on AI applications, not models: Where openness makes AI risks worse, policy interventions are more likely to succeed in going after how the AI system is deployed, not by restricting the sharing of information about AI models.
  • Policy should proactively advance openness: Policy on this topic must be developed and vetted by more than just national security agencies, and should promote more R&D into open approaches for AI and better standards for testing and releasing open models.

“The range of participants in this effort – from civil liberties to civil rights organizations, from progressive groups to more market-oriented groups, with advocates for openness in both government and industry, and a broad range of academic experts from law, policy, and computer science – demonstrates how the future of open innovation around powerful AI models is critically important to a wide variety of communities,” said Kevin Bankston, Senior Advisor on AI Governance for CDT. “As our letter highlights, the benefits of open models over closed models for competition, innovation, security and transparency are rather clear, while the risks compared to closed models aren’t. Therefore the White House and Congress should exercise great caution when considering whether and how to regulate the publication of open models.”

Mozilla’s upcoming longer submission to the Commerce Department’s request for comment will include greater details including expanding on Mozilla’s long history of increasing privacy, security, and functionality across the internet through its products, investments, and advocacy. It highlights key findings from the recent Columbia Convening on Openness and AI, and explains how openness is vital to innovation, competition, and accountability – including safety and security, as well as protecting rights and freedoms. It also takes on some of the most prominent arguments driving the push to limit access to AI models, such as claims of “unknown unknown” security risks.

The joint letter and Mozilla’s upcoming response to the call for comments demonstrates how openness can be an enabler of a better future – one where everyone can help build, shape, and test AI so that it works for everyone. That is the future we need, and it’s the one we must keep working toward through policy, technology, and advocacy alike.

The post Mozilla, Center for Democracy and Technology call for openness and transparency in AI appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogManifest V3 & Manifest V2 (March 2024 update)

Calling all extension developers! With Manifest V3 picking up steam again, we wanted to provide some visibility into our current plans as a lot has happened since we published our last update.

Back in 2022 we released our initial implementation of MV3, the latest version of the extensions platform, in Firefox. Since then, we have been hard at work collaborating with other browser vendors and community members in the W3C WebExtensions Community Group (WECG). Our shared goals were to improve extension APIs while addressing cross browser compatibility. That collaboration has yielded some great results to date and we’re proud to say our participation has been instrumental in shaping and designing those APIs to ensure broader applicability across browsers.

We continue to support DOM-based background scripts in the form of Event pages, and the blocking webRequest feature, as explained in our previous blog post. Chrome’s version of MV3 requires service worker-based background scripts, which we do not support yet. However, an extension can specify both and have it work in Chrome 121+ and Firefox 121+. Support for Event pages, along with support for blocking webRequest, is a divergence from Chrome that enables use cases that are not covered by Chrome’s MV3 implementation.
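
To illustrate the “specify both” approach, the background section of such a manifest can list both keys; here it is mirrored as a Python dict (the key names are what matter, and the script name is a hypothetical example):

import json

# Per the post, Firefox reads "scripts" (an Event page) while Chrome reads
# "service_worker", and Chrome 121+ / Firefox 121+ tolerate the other key.
manifest_fragment = {
    "background": {
        "scripts": ["background.js"],
        "service_worker": "background.js",
    }
}

print(json.dumps(manifest_fragment, indent=2))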

Well, what’s happening with MV2, you ask? Great question – in case you missed it, Google announced late last year its plans to resume the MV2 deprecation schedule. Firefox, however, has no plans to deprecate MV2 and will continue to support MV2 extensions for the foreseeable future. And even if we re-evaluate this decision at some point down the road, we anticipate providing a notice of at least 12 months for developers to adjust accordingly and not feel rushed.

As our plans solidify, future updates around our MV3 efforts will be shared via this blog. We are loosely targeting our next update after the conclusion of the upcoming WECG meeting at the Apple offices in San Diego. For more information on adopting MV3, please refer to our migration guide. Another great resource worth checking out is the recent FOSDEM presentation a couple of team members delivered, Firefox, Android, and Cross-browser WebExtensions in 2024.

If you have questions, concerns or feedback on Manifest V3 we would love to hear from you in the comments section below or if you prefer, drop us an email.

The post Manifest V3 & Manifest V2 (March 2024 update) appeared first on Mozilla Add-ons Community Blog.

Open Policy & AdvocacyMozilla joins allies to co-sign an amicus brief in State of Nevada vs. Meta Platforms defending end-to-end encryption

Mozilla recently signed onto an amicus brief – alongside the Electronic Frontier Foundation, the Internet Society, Signal, and a broad coalition of other allies – on the Nevada Attorney General’s recent attempt to limit encryption. The amicus brief signals a collective commitment from these organizations to the importance of encryption in safeguarding digital privacy and security as fundamental rights.

The core of this dispute is the Nevada Attorney General’s proposition to limit the application of end-to-end encryption (E2EE) for children’s online communications. It is a move that ostensibly aims to aid law enforcement but, in practice, could significantly weaken the privacy and security of all internet users, including children. Nevada argues that end-to-end encryption might impede some criminal investigations. However, as the amicus brief explains, encryption does not prevent either the sender or recipient from reporting concerning content to police, nor does it prevent police from accessing other metadata about communications via lawful requests. Blocking the rollout of end-to-end encryption would undermine privacy and security for everyone for a marginal benefit that would be far outweighed by the harms such a draconian limitation could create.

The case, set for a hearing in Clark County, Nevada, encapsulates a broader debate on the balance between enabling law enforcement to combat online crimes and preserving robust online protections for all users – especially vulnerable populations like children. Mozilla’s involvement in this amicus brief is founded on its long standing belief that encryption is an essential component of its core Manifesto tenet – privacy and security are fundamental online and should not be treated as optional.

The post Mozilla joins allies to co-sign an amicus brief in State of Nevada vs. Meta Platforms defending end-to-end encryption appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyMozilla Joins Amicus Brief Supporting Software Interoperability

In modern technology, interoperability between programs is crucial to the usability of applications, user choice, and healthy competition. Today Mozilla has joined an amicus brief at the Ninth Circuit, to ensure that copyright law does not undermine the ability of developers to build interoperable software.

This amicus brief comes in the latest appeal in a multi-year courtroom saga between Oracle and Rimini Street. The sprawling litigation has lasted more than a decade and has already been up to the Supreme Court on a procedural question about court costs. Our amicus brief addresses a single issue: should the fact that a software program is built to be interoperable with another program be treated, on its own, as establishing copyright infringement?

We believe that most software developers would answer this question with: “Of course not!” But the district court found otherwise. The lower court concluded that even if Rimini’s software does not include any Oracle code, Rimini’s programs could be infringing derivative works simply “because they do not work with any other programs.” This is a mistake.

The classic example of a derivative work is something like a sequel to a book or movie. For example, The Empire Strikes Back is a derivative work of the original Star Wars movie. Our amicus brief explains that it makes no sense to apply this concept to software that is built to interoperate with another program. Not only that, interoperability of software promotes competition and user choice. It should be celebrated, not punished.

This case raises similar themes to another high profile software copyright case, Google v. Oracle, which considered whether it was copyright infringement to re-implement an API. Mozilla submitted an amicus brief there also, where we argued that copyright law should support interoperability. Fortunately, the Supreme Court reached the right conclusion and ruled that re-implementing an API was fair use. That ruling and other important fair use decisions would be undermined if a copyright plaintiff could use interoperability as evidence that software is an infringing derivative work.

In today’s brief Mozilla joins a broad coalition of advocates for openness and competition, including the Electronic Frontier Foundation, Creative Commons, Public Knowledge, iFixit, and the Digital Right to Repair Coalition. We hope the Ninth Circuit will fix the lower court’s mistake and hold that interoperability is not evidence of infringement.

The post Mozilla Joins Amicus Brief Supporting Software Interoperability appeared first on Open Policy & Advocacy.

hacks.mozilla.orgImproving Performance in Firefox and Across the Web with Speedometer 3

In collaboration with the other major browser engine developers, Mozilla is thrilled to announce Speedometer 3 today. Like previous versions of Speedometer, this benchmark measures what we think matters most for performance online: responsiveness. But today’s release is more open and more challenging than before, and is the best tool for driving browser performance improvements that we’ve ever seen.

This fulfills the vision set out in December 2022 to bring experts across the industry together in order to rethink how we measure browser performance, guided by a shared goal to reflect the real-world Web as much as possible. This is the first time the Speedometer benchmark, or any major browser benchmark, has been developed through a cross-industry collaboration supported by each major browser engine: Blink, Gecko, and WebKit. Working together means we can build a shared understanding of what matters to optimize, and facilitates broad review of the benchmark itself: both of which make it a stronger lever for improving the Web as a whole.

And we’re seeing results: Firefox got faster for real users in 2023 as a direct result of optimizing for Speedometer 3. This took a coordinated effort from many teams: understanding real-world websites, building new tools to drive optimizations, and making a huge number of improvements inside Gecko to make web pages run more smoothly for Firefox users. In the process, we’ve shipped hundreds of bug fixes across JS, DOM, Layout, CSS, Graphics, frontend, memory allocation, profile-guided optimization, and more.

We’re happy to see core optimizations in all the major browser engines turning into improved responsiveness for real users, and are looking forward to continuing to work together to build performance tests that improve the Web.

The post Improving Performance in Firefox and Across the Web with Speedometer 3 appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla Mornings: Choice or Illusion? Tackling Harmful Design Practices

The first edition of Mozilla Mornings in 2024 will explore the impact of harmful design on consumers in the digital world and the role regulation can play in addressing such practices.

In the evolving digital landscape, deceptive and manipulative design practices, as well as aggressive personalisation and profiling, pose significant threats to consumer welfare, potentially leading to financial loss, privacy breaches, and compromised security.

While existing EU regulations address some aspects of these issues, questions persist about their adequacy in combating harmful design patterns comprehensively. What additional measures are needed to ensure digital fairness for consumers and empower designers who want to act ethically?

To discuss these issues, we are delighted to announce that the following speakers will be participating in our panel discussion:

  • Egelyn Braun, Team Leader DG JUST, European Commission
  • Estelle Hary, Co-founder, Design Friction
  • Silvia de Conca, Amsterdam Law & Technology Institute, Vrije Universiteit Amsterdam
  • Finn Myrstad, Digital Policy Director, Norwegian Consumer Council

The event will also feature a fireside chat with MEP Kim van Sparrentak from Greens/EFA.

  • Date: Wednesday 20th March 2024
  • Location: L42, Rue de la Loi 42, 1000 Brussels
  • Time: 08:30 – 10:30 CET

To register, click here.

The post Mozilla Mornings: Choice or Illusion? Tackling Harmful Design Practices appeared first on Open Policy & Advocacy.

Mozilla L10NA Deep Dive Into the Evolution of Pretranslation in Pontoon

Quite often, an imperfect translation is better than no translation. So why even publish untranslated content when high-quality machine translation systems are fast and affordable? Why not immediately machine-translate content and progressively ship enhancements as they are submitted by human translators?

At Mozilla, we call this process pretranslation. We began implementing it in Pontoon before COVID-19 hit, thanks to Vishal, who landed the first patches. Then we hit some headwinds and didn’t make much progress until 2022, when the effort received a significant development boost; we finally launched it for a general audience in September 2023.

So far, 20 of our localization teams (locales) have opted to use pretranslation across 15 different localization projects. Over 20,000 pretranslations have been submitted and none of the teams have opted out of using it. These efforts have resulted in a higher translation completion rate, which was one of our main goals.

In this article, we’ll take a look at how we developed pretranslation in Pontoon. Let’s start by exploring how it actually works.

How does pretranslation work?

Pretranslation is enabled upon a team’s request (it’s off by default). When a new string is added to a project, it gets automatically pretranslated using a 100% match from translation memory (TM), which also includes translations of glossary entries. If a perfect match doesn’t exist, a locale-specific machine translation (MT) engine is used, trained on the locale’s translation memory.

Pretranslation opt-in form.
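
In pseudocode, the selection logic boils down to something like the sketch below, where the dictionary and the callable stand in for Pontoon’s real translation memory and MT services:

def pretranslate(source_string, locale, memory, mt_engine):
    # A 100% translation memory match (including glossary entries) wins;
    # otherwise fall back to the locale-specific MT engine.
    match = memory.get((source_string, locale))
    if match is not None:
        return match, "translation-memory"
    return mt_engine(source_string, locale), "machine-translation"

# Toy usage:
memory = {("Save", "sl"): "Shrani"}
mt_engine = lambda text, locale: "<mt:%s> %s" % (locale, text)
print(pretranslate("Save", "sl", memory, mt_engine))       # TM hit
print(pretranslate("Open file", "sl", memory, mt_engine))  # MT fallback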

After pretranslations are retrieved and saved in Pontoon, they get synced to our primary localization storage (usually a GitHub repository) and hence immediately made available for shipping, unless they fail our quality checks. In that case, they don’t propagate to repositories until errors or warnings are fixed during the review process.

Until reviewed, pretranslations are visually distinguishable from user-submitted suggestions and translations. This makes post-editing much easier and more efficient. Another key factor that influences pretranslation review time is, of course, the quality of pretranslations. So let’s see how we picked our machine translation provider.

Choosing a machine translation engine

We selected the machine translation provider based on two primary factors: quality of translations and the number of supported locales. To make translations match the required terminology and style as much as possible, we were also looking for the ability to fine-tune the MT engine by training it on our translation data.

In March 2022, we compared Bergamot, Google’s Cloud Translation API (generic), and Google’s AutoML Translation (with custom models). Using these services we translated a collection of 1,000 strings into 5 locales (it, de, es-ES, ru, pt-BR), and used automated scores (BLEU, chrF++) as well as manual evaluation to compare them with the actual translations.

Performance of tested MT engines for Italian (it).

Google’s AutoML Translation outperformed the other two candidates in virtually all tested scenarios and metrics, so it became the clear choice. It supports over 60 locales. Google’s Generic Translation API supports twice as many, but we currently don’t plan to use it for pretranslation in locales not supported by Google’s AutoML Translation.
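
If you want to run this kind of automated comparison yourself, the scores are straightforward to compute with the sacrebleu Python package; the two strings below are toy data, not our evaluation set:

from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["Salva il file prima di uscire."]      # MT output
references = [["Salvare il file prima di uscire."]]  # one human reference stream

print(BLEU().corpus_score(hypotheses, references))
print(CHRF(word_order=2).corpus_score(hypotheses, references))  # chrF++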

Making machine translation actually work

Currently, around 50% of pretranslations generated by Google’s AutoML Translation get approved without any changes. For some locales, the rate is around 70%. Keep in mind however that machine translation is only used when a perfect translation memory match isn’t available. For pretranslations coming from translation memory, the approval rate is 90%.

Comparison of pretranslation approval rate between teams.

To reach that approval rate, we had to make a series of adjustments to the way we use machine translation.

For example, we convert multiline messages to single-line messages before machine-translating them. Otherwise, each line is treated as a separate message and the resulting translation is of poor quality.

Multiline message:

Make this password unique and different from any others you use.
A good strategy to follow is to combine two or more unrelated
words to create an entire pass phrase, and include numbers and symbols.

Multiline message converted to a single-line message:

Make this password unique and different from any others you use. A good strategy to follow is to combine two or more unrelated words to create an entire pass phrase, and include numbers and symbols.
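
The conversion itself is a simple whitespace normalization; a minimal sketch:

def to_single_line(message):
    # Collapse a multiline message so the MT engine sees one message,
    # not one message per line.
    return " ".join(line.strip() for line in message.splitlines() if line.strip())

multiline = ("Make this password unique and different from any others you use.\n"
             "A good strategy to follow is to combine two or more unrelated\n"
             "words to create an entire pass phrase, and include numbers and symbols.")
print(to_single_line(multiline))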

Let’s take a closer look at two of the more time-consuming changes.

The first one is specific to our machine translation provider (Google’s AutoML Translation). During initial testing, we noticed it would often take a long time for the MT engine to return results, up to a minute. Sometimes it even timed out! Such a long response time not only slows down pretranslation, it also makes machine translation suggestions in the translation editor less useful – by the time they appear, the localizer has already moved on to translating the next string.

After further testing, we began to suspect that our custom engine shuts down after a period of inactivity, thus requiring a cold start for the next request. We contacted support and our assumption was confirmed. To overcome the problem, we were advised to send a dummy query to the service every 60 seconds just to keep the system alive.

Of course, it’s reasonable to shut down inactive services to free up resources, but the way to keep them alive isn’t. We have to make (paid) requests to each locale’s machine translation engines every minute just to make sure they work when we need them. And sometimes even that doesn’t help – we still see about a dozen ServiceUnavailable errors every day. It would be so much easier if we could just customize the default inactivity period or pay extra for an always-on service.
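
In practice the workaround looks something like the sketch below, where translate_dummy is a hypothetical stand-in for a (paid) one-word request to a locale’s custom engine:

import threading

KEEPALIVE_SECONDS = 60

def keep_alive(engines, translate_dummy):
    # Ping every custom engine to prevent a cold start, then re-arm the timer.
    for engine in engines:
        try:
            translate_dummy(engine)
        except Exception as err:  # e.g. ServiceUnavailable, despite the pings
            print("keep-alive failed for %s: %s" % (engine, err))
    timer = threading.Timer(KEEPALIVE_SECONDS, keep_alive, (engines, translate_dummy))
    timer.daemon = True
    timer.start()

keep_alive(["sl", "it", "de"], lambda engine: None)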

The other issue we had to address is quite common in machine translation systems: they are not particularly good at preserving placeholders. In particular, extra space often gets added to variables or markup elements, resulting in broken translations.

Message with variables:

{ $partialSize } of { $totalSize }

Message with variables machine-translated to Slovenian (adding space after $ breaks the variable):

{$ partialSize} od {$ totalSize}

We tried to mitigate this issue by wrapping placeholders in <span translate="no">…</span>, which tells Google’s AutoML Translation to not translate the wrapped text. This approach requires the source text to be submitted as HTML (rather than plain text), which triggers a whole new set of issues — from adding spaces in other places to escaping quotes — and we couldn’t circumvent those either. So this was a dead end.

The solution was to store every placeholder in the Glossary with the same value for both source string and translation. That approach worked much better and we still use it today. It’s not perfect, though, so we only use it to pretranslate strings for which the default (non-glossary) machine translation output fails our placeholder quality checks.
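
A simplified sketch of such a check (Pontoon’s real quality checks are more thorough than this):

import re

# Matches Fluent-style placeables such as "{ $partialSize }" verbatim.
PLACEABLE = re.compile(r"\{\s*\$[\w-]+\s*\}")

def placeholders_intact(source: str, translation: str) -> bool:
    # Pass only if every placeable from the source survives verbatim (as a
    # multiset) in the MT output; "{$ partialSize}" no longer matches, so the
    # broken translation from the example above fails the check.
    return sorted(PLACEABLE.findall(source)) == sorted(PLACEABLE.findall(translation))

print(placeholders_intact("{ $partialSize } of { $totalSize }",
                          "{$ partialSize} od {$ totalSize}"))  # False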

Making pretranslation work with Fluent messages

On top of the machine translation service improvements, we also had to account for the complexity of Fluent messages, which are used by most of the projects we localize at Mozilla. Fluent is capable of expressing virtually any imaginable message, which makes it the localization system to choose if you want your software translations to sound natural.

As a consequence, Fluent message format comes with a syntax that allows for expressing such complex messages. And since machine translation systems (as seen above) already have trouble with simple variables and markup elements, their struggles multiply with messages like this:

shared-photos =
 { $photoCount ->
    [one]
      { $userGender ->
        [male] { $userName } added a new photo to his stream.
        [female] { $userName } added a new photo to her stream.
       *[other] { $userName } added a new photo to their stream.
      }
   *[other]
      { $userGender ->
        [male] { $userName } added { $photoCount } new photos to his stream.
        [female] { $userName } added { $photoCount } new photos to her stream.
       *[other] { $userName } added { $photoCount } new photos to their stream.
      }
  }

That means Fluent messages need to be pre-processed before they are sent to the pretranslation systems. Only the relevant parts of the message need to be pretranslated, while syntax elements must remain untouched. In the example above, we extract the following message parts, pretranslate them, and replace them with pretranslations in the original message (see the sketch after the list):

  • { $userName } added a new photo to his stream.
  • { $userName } added a new photo to her stream.
  • { $userName } added a new photo to their stream.
  • { $userName } added { $photoCount } new photos to his stream.
  • { $userName } added { $photoCount } new photos to her stream.
  • { $userName } added { $photoCount } new photos to their stream.
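
To make that concrete, here’s a sketch of the idea using the python-fluent package (a simplification: our actual pipeline serializes whole variant patterns, placeables included, and protects placeholders with the glossary trick described earlier):

from fluent.syntax import FluentParser, FluentSerializer
from fluent.syntax.visitor import Transformer

ftl_source = """\
shared-photos =
    { $photoCount ->
        [one] { $userName } added a new photo to their stream.
       *[other] { $userName } added { $photoCount } new photos to their stream.
    }
"""

def machine_translate(text):
    return text.upper()  # stand-in for a real MT call

class PretranslatePattern(Transformer):
    # Rewrite only TextElement nodes, i.e. the translatable fragments of each
    # variant pattern; selectors, variant keys and placeables pass through.
    def visit_TextElement(self, node):
        node.value = machine_translate(node.value)
        return node

entry = FluentParser().parse_entry(ftl_source)
PretranslatePattern().visit(entry)
print(FluentSerializer().serialize_entry(entry))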

To be more accurate, this is what happens for languages like German, which uses the same CLDR plural forms as English. For locales without plurals, like Chinese, we drop plural forms completely and only pretranslate the remaining three parts. If the target language is Slovenian, two additional plural forms need to be added (two, few), which in this example results in a total of 12 messages needing pretranslation (four plural forms, with three gender forms each).
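
Back-of-the-envelope, the combinatorics look like this (locale data reduced to raw counts for illustration):

# CLDR cardinal plural categories per locale (a simplified subset).
PLURAL_FORMS = {"en": 2, "de": 2, "zh": 1, "sl": 4}
GENDER_FORMS = 3  # male / female / other, as in the example above

def parts_to_pretranslate(locale: str) -> int:
    # Every plural variant of the target locale repeats all gender variants.
    return PLURAL_FORMS[locale] * GENDER_FORMS

print(parts_to_pretranslate("de"))  # 6, as in the list above
print(parts_to_pretranslate("zh"))  # 3: plural forms are dropped entirely
print(parts_to_pretranslate("sl"))  # 12: four plural forms x three genders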

Finally, Pontoon’s translation editor uses a custom UI for translating access keys. That means it’s capable of detecting which part of the message is an access key and which is the label the access key belongs to. The access key should ideally be one of the characters included in the label, so the editor generates a list of candidates that translators can choose from. In pretranslation, the first candidate is used directly as the access key, so no TM or MT is involved.
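
Roughly, the candidate generation works like this (a simplified sketch, not Pontoon’s exact rules):

def accesskey_candidates(label: str) -> list[str]:
    # Unique characters of the translated label, in order of appearance;
    # in pretranslation the first candidate is used as the access key.
    candidates = []
    for ch in label:
        if ch.isalnum() and ch not in candidates:
            candidates.append(ch)
    return candidates

print(accesskey_candidates("Save As"))  # ['S', 'a', 'v', 'e', 'A', 's']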

A screenshot of Notepad showing access keys in the menu.

Access keys (not to be confused with shortcut keys) are used for accessibility to interact with all controls or menu items using the keyboard. Windows indicates access keys by underlining the access key assignment when the Alt key is pressed. Source: Microsoft Learn.

Looking ahead

With every enhancement we shipped, the case for publishing untranslated text instead of pretranslations became weaker and weaker. And there’s still room for improvements in our pretranslation system.

Ayanaa has done extensive research on the impact of Large Language Models (LLMs) on translation efficiency. She’s now working on integrating LLM-assisted translations into Pontoon’s Machinery panel, from which localizers will be able to request alternative translations, including formal and informal options.

If the target locale could set the tone to formal or informal on the project level, we could benefit from this capability in pretranslation as well. We might also improve the quality of machine translation suggestions by providing existing translations into other locales as references in addition to the source string.

If you are interested in using pretranslation or already use it, we’d love to hear your thoughts! Please leave a comment, reach out to us on Matrix, or file an issue.

Mozilla L10NL10n Report: February 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

While the amount of new content has been relatively small over the last few months in Firefox, there have been some UI changes and updates to privacy-settings-related text such as form autofill, Cookie Banner Blocker, passwords (about:logins), and cookie and site data*. One change happening here (and across all Mozilla products) is the move away from using the term “login” to describe the credentials for accessing websites, in favor of “password(s)”.

In addition, while the number of strings is low, Firefox’s PDF viewer will soon have the ability to highlight content. You can test this feature now in Nightly.

Most of these strings and translations can be previewed by checking a Nightly build. If you’re new to localizing Firefox or if you missed our deep dive, please check out our blog post from July to learn more about the Firefox release schedule.

*Recently in our L10N community matrix channel, someone from our community asked how the new strings for clearing browsing history and data (see screenshot below) from Cookie and Site Data could be shown in Nightly.

Pontoon screenshot showing the strings for clearing browsing history and data from Cookie and Site Data.

In order to show the strings in Nightly, the privacy.sanitize.useOldClearHistoryDialog preference needs to be set to false. To set the preference, type about:config in your URL bar and press enter. A page may appear warning you to proceed with caution; click the button to continue. On the page that follows, paste privacy.sanitize.useOldClearHistoryDialog into the search field, then click the toggle button to change the value to false.

You can then trigger the new dialog by clicking “Clear Data…” from the Cookies and Site Data settings or “Clear History…” from the History section. (You may need to quit Firefox and open it again for the change to take effect.)

If you have any doubts about managing about:config, you can consult the Configuration Editor guide on SUMO.

What’s new or coming up in mobile

Much like desktop, mobile land has been pretty calm recently.

Having said that, we would like to call out the new Translation feature that is now available to test on the latest Firefox for Android v124 Nightly builds (this is possible only through the secret settings at the moment). It’s a built-in full page translation feature that allows you to seamlessly browse the web in your preferred language. As you navigate the site, Firefox continuously translates new content.

Check your Pontoon notifications for instructions on how to test it out. Note that the feature is not available on iOS at the moment.

In the past couple of months you may have also noticed strings mentioning a new shopping feature called “Review Checker” (that we mentioned for desktop in our November edition). The feature is still a bit tricky to test on Android, but there are instructions you can follow – these can also be found in your Pontoon notification archive.

For testing on iOS, you just need to have the latest Beta version installed and navigate to the product pages on the US sites of amazon.com, bestbuy.com, and walmart.com. A logo will appear in the URL bar, along with a notification that lets you launch and test the feature.

Finally, another notable change that has been called out under the Firefox desktop section above: we are moving away from using the term “login” to describe the credentials for accessing websites, in favor of “password(s)”.

What’s new or coming up in Foundation projects

New languages have been added to Common Voice in 2023: Tibetan, Chichewa, Ossetian, Emakhuwa, Laz, Pular Guinée, Sindhi. Welcome!

What’s new or coming up in Pontoon

Improved support for mobile devices

The Pontoon translation workspace is now responsive, which means you can finally use Pontoon on your mobile device to translate and review strings! We developed a single-column layout for mobile phones and a 2-column layout for tablets.

Screenshot of Pontoon UI on a smartphone running Firefox for Android.

2024 Pontoon survey

Thanks again to everyone who has participated in the 2024 Pontoon survey. The 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

Friends of the Lion

We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!

Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

hacks.mozilla.orgAnnouncing Interop 2024

The Interop Project has become one of the key ways that browser vendors come together to improve the web platform. By working to identify and improve key areas where differences between browser engines are impacting users and web developers, Interop is a critical tool in ensuring the long-term health of the open web.

The web platform is built on interoperability based on common standards. This offers users a degree of choice and control that sets the web apart from proprietary platforms defined by a single implementation. A commitment to ensuring that the web remains open and interoperable forms a fundamental part of Mozilla’s manifesto and web vision, and is why we’re so committed to shipping Firefox with our own Gecko engine.

However, interoperability requires care and attention to maintain. When implementations ship with differences between the standard and each other, this creates a pain point for web authors; they have to choose between avoiding the problematic feature entirely or coding to specific implementation quirks. Over time, if enough authors produce implementation-specific content, interoperability is lost, and along with it user agency.

This is the problem that the Interop Project is designed to address. By bringing browser vendors together to focus on interoperability, the project identifies areas where interoperability issues are causing problems, or may do so in the near future. Tracking progress on those issues with a public metric provides accountability to the broader web community on addressing the problems.

The project works by identifying a set of high-priority focus areas: parts of the web platform where everyone agrees that making interoperability improvements will be of high value. These can be existing features where we know browsers have slightly different behaviors that are causing problems for authors, or they can be new features which web developer feedback shows are in high demand and which we want to launch across multiple implementations with high interoperability from the start. For each focus area, a set of web-platform-tests is selected to cover that area, and the score is computed from the pass rate of these tests.

Interop 2023

The Interop 2023 project covered high profile features like the new :has() selector, and web-codecs, as well as areas of historically poor interoperability such as pointer events.

The results of the project speak for themselves: every browser ended the year with scores in excess of 97% for the prerelease versions of their browsers. Moreover, the overall Interoperability score — that is, the fraction of focus area tests that pass in all participating browser engines — increased from 59% at the start of the year to 95% now. This result represents a huge improvement in the consistency and reliability of the web platform. For users this will result in a more seamless experience, with sites behaving reliably in whichever browser they prefer.
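
As a toy illustration of that metric (hypothetical data, not the actual wpt.fyi scoring pipeline):

def overall_interop_score(results):
    # results maps engine -> {test: passed}. The overall score is the
    # fraction of focus-area tests that pass in every participating engine.
    tests = set.intersection(*(set(r) for r in results.values()))
    passing_everywhere = [t for t in tests if all(r[t] for r in results.values())]
    return len(passing_everywhere) / len(tests)

results = {
    "gecko":  {"has-selector": True, "subgrid": True},
    "blink":  {"has-selector": True, "subgrid": True},
    "webkit": {"has-selector": True, "subgrid": False},
}
print(overall_interop_score(results))  # 0.5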

For the :has() selector — which we know from author feedback has been one of the most in-demand CSS features for a long time — every implementation is now passing 100% of the web-platform-tests selected for the focus area. Launching a major new platform feature with this level of interoperability demonstrates the power of the Interop project to progress the platform without compromising on implementation diversity, developer experience, or user choice.

As well as focus areas, the Interop project also has “investigations”. These are areas where we know that we need to improve interoperability, but aren’t at the stage of having specific tests which can be used to measure that improvement. In 2023 we had two investigations. The first was for accessibility, which covered writing many more tests for ARIA computed role and accessible name, and ensuring they could be run in different browsers. The second was for mobile testing, which has resulted in both Mobile Firefox and Chrome for Android having their initial results in wpt.fyi.

Interop 2024

Following the success of Interop 2023, we are pleased to confirm that the project will continue in 2024 with a new selection of focus areas, representing areas of the web platform where we think we can have the biggest positive impact on users and web developers.

New Focus Areas

New focus areas for 2024 include, among other things:

  • Popover API – This provides a declarative mechanism to create content that always renders in the topmost layer, so that it overlays other web page content. This can be useful for building features like tooltips and notifications. Support for popover was the #1 author request in the recent State of HTML survey.
  • CSS Nesting – This is a feature that’s already shipping, which allows writing more compact and readable CSS files, without the need for external tooling such as preprocessors. However different browsers shipped slightly different behavior based on different revisions of the spec, and Interop will help ensure that everyone aligns on a single, reliable, syntax for this popular feature.
  • Accessibility – Ensuring that the web is accessible to all users is a critical part of Mozilla’s manifesto. Our ability to include Accessibility testing in Interop 2024 is a direct result of the success of the Interop 2023 Accessibility Investigation in increasing the test coverage of key accessibility features.

The full list of focus areas is available in the project README.

Carryover

In addition to the new focus areas, we will carry over some of the 2023 focus areas where there’s still more work to be done. Of particular interest is the Layout focus area, which will combine the previous Flexbox, Grid and Subgrid focus areas into one area covering all the most important layout primitives for the modern web. On top of that, the Custom Properties, URL and Mouse and Pointer Events focus areas will be carried over. These represent cases where, even though we’ve already seen large improvements in Interoperability, we believe that users and web authors will benefit from even greater convergence between implementations.

Investigations

As well as focus areas, Interop 2024 will also feature a new investigation into improving the integration of WebAssembly testing into web-platform-tests. This will open up the possibility of including WASM features in future Interop projects. In addition we will extend the Accessibility and Mobile Testing investigations, as there is more work to be done to make those aspects of the platform fully testable across different implementations.

The post Announcing Interop 2024 appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgOption Soup: the subtle pitfalls of combining compiler flags

Firefox development uncovers many cross-platform differences and unique features of its combination of dependencies. Engineers working on Firefox regularly overcome these challenges, and while we can’t detail all of them, we think you’ll enjoy hearing about some. So here’s a sample of a recent technical investigation.

During the Firefox 120 beta cycle, a new crash signature appeared on our radars with significant volume.

At that time, the distribution across operating systems revealed that more than 50% of the crash volume originated from Ubuntu 18.04 LTS users.

The main process crashes in a CanvasRenderer thread, with the following call stack:

0  firefox  std::locale::operator=  
1  firefox  std::ios_base::imbue  
2  firefox  std::basic_ios<char, std::char_traits<char> >::imbue  
3  libxul.so  sh::InitializeStream<std::__cxx11::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> > >  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Common.h:238
3  libxul.so  sh::TCompiler::setResourceString  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:1294
4  libxul.so  sh::TCompiler::Init  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:407
5  libxul.so  sh::ConstructCompiler  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/ShaderLang.cpp:368
6  libxul.so  mozilla::webgl::ShaderValidator::Create  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:215
6  libxul.so  mozilla::WebGLContext::CreateShaderValidator const  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:196
7  libxul.so  mozilla::WebGLShader::CompileShader  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShader.cpp:98

At first glance, we want to blame WebGL. The C++ standard library functions cannot be at fault, right?

But when looking at the WebGL code, the crash occurs in the perfectly valid lines of C++ summarized below:

std::ostringstream stream;
stream.imbue(std::locale::classic());

This code should never crash, and yet it does. In fact, taking a closer look at the stack gives a first lead for investigation:
Although we crash into functions that belong to the C++ standard library, these functions appear to live in the firefox binary.

This is an unusual situation that never occurs with official builds of Firefox.
It is, however, very common for distributions to change configuration settings and apply downstream patches to an upstream source, so no worries about that.
Moreover, there is only a single build of Firefox Beta that is causing this crash.

We know this thanks to a unique identifier associated with any ELF binary.
Here, if we choose any specific version of Firefox 120 Beta (such as 120b9), the crashes all embed the same unique identifier for firefox.

Now, how can we guess what build produces this weird binary?

A useful user comment mentions that they regularly experience this crash since updating to 120.0~b2+build1-0ubuntu0.18.04.1.
And by looking for this build identifier, we quickly reach the Firefox Beta PPA.
Then indeed, we are able to reproduce the crash by installing it in an Ubuntu 18.04 LTS virtual machine: it occurs when loading any WebGL page!
With the binary now at hand, running nm -D ./firefox confirms the presence of several symbols related to libstdc++ that live in the text section (T marker).

Templated and inline symbols from libstdc++ usually appear as weak (W marker), so there is only one explanation for this situation: firefox has been statically linked with libstdc++, probably through -static-libstdc++.

Fortunately, the build logs are available for all Ubuntu packages.
After some digging, we find the logs for the 120b9 build, which indeed contain references to -static-libstdc++.

But why?

Again, everything is well documented, and thanks to well trained digging skills we reach a bug report that provides interesting insights.
Firefox requires a modern C++ compiler, and hence a modern libstdc++, which is unavailable on old systems like Ubuntu 18.04 LTS.
The build uses -static-libstdc++ to close this gap.
This just explains the weird setup though.

What about the crash?

Since we can now reproduce it, we can launch Firefox in a debugger and continue our investigation.
When inspecting the crash site, we seem to crash because std::locale::classic() is not properly initialized.
Let’s take a peek at the implementation.

const locale& locale::classic()
{
  _S_initialize();
  return *(const locale*)c_locale;
}

_S_initialize() is in charge of making sure that c_locale will be properly initialized before we return a reference to it.
To achieve this, _S_initialize() calls another function, _S_initialize_once().

void locale::_S_initialize()
{
#ifdef __GTHREADS
  if (!__gnu_cxx::__is_single_threaded())
    __gthread_once(&_S_once, _S_initialize_once);
#endif

  if (__builtin_expect(!_S_classic, 0))
    _S_initialize_once();
}

In _S_initialize(), we first go through a wrapper for pthread_once(): the first thread that reaches this code consumes _S_once and calls _S_initialize_once(), whereas other threads (if any) are stuck waiting for _S_initialize_once() to complete.

This looks rather fail-proof, right?

There is even an extra direct call to _S_initialize_once() if _S_classic is still uninitialized after that.
Now, _S_initialize_once() itself is rather straightforward: it allocates _S_classic and puts it within c_locale.

void
locale::_S_initialize_once() throw()
{
  // Need to check this because we could get called once from _S_initialize()
  // when the program is single-threaded, and then again (via __gthread_once)
  // when it's multi-threaded.
  if (_S_classic)
    return;

  // 2 references.
  // One reference for _S_classic, one for _S_global
  _S_classic = new (&c_locale_impl) _Impl(2);
  _S_global = _S_classic;
  new (&c_locale) locale(_S_classic);
}

The crash looks as if we never went through _S_initialize_once(), so let’s put a breakpoint there and see what happens.
And just by doing this, we already notice something suspicious.
We do reach _S_initialize_once(), but not within the firefox binary: instead, we only ever reach the version exported by liblgpllibs.so.
In fact, liblgpllibs.so is also statically linked with libstdc++, such that firefox and liblgpllibs.so both embed and export their own _S_initialize_once() function.

By default, symbol interposition applies, and _S_initialize_once() should always be called through the procedure linkage table (PLT), so that every module ends up calling the same version of the function.
If symbol interposition were happening here, we would expect that liblgpllibs.so would reach the version of _S_initialize_once() exported by firefox rather than its own, because firefox was loaded first.

So maybe there is no symbol interposition.

This can occur when using -fno-semantic-interposition.

Each version of the standard library would live on its own, independent from the other versions.
But neither the Firefox build system nor the Ubuntu maintainer seems to pass this flag to the compiler.
However, by looking at the disassembly for _S_initialize() and _S_initialize_once(), we can see that the exported global variables (_S_once, _S_classic, _S_global) are subject to symbol interposition:

These accesses all go through the global offset table (GOT), so that every module ends up accessing the same version of the variable.
This seems strange given what we said earlier about _S_initialize_once().
Non-exported global variables (c_locale, c_locale_impl), however, are accessed directly without symbol interposition, as expected.

We now have enough information to explain the crash.

When we reach _S_initialize() in liblgpllibs.so, we actually consume the _S_once that lives in firefox, and initialize the _S_classic and _S_global that live in firefox.
But we initialize them with pointers to well initialized variables c_locale_impl and c_locale that live in liblgpllibs.so!
The variables c_locale_impl and c_locale that live in firefox, however, remain uninitialized.

So if we later reach _S_initialize() in firefox, everything looks as if initialization has happened.
But then we return a reference to the version of c_locale that lives in firefox, and this version has never been initialized.

Boom!

Now the main question is: why do we see interposition occur for _S_once but not for _S_initialize_once()?
If we step back for a minute, there is a fundamental distinction between these symbols: one is a function symbol, the other is a variable symbol.
And indeed, the Firefox build system uses the -Bsymbolic-functions flag!

The ld man page describes it as follows:

-Bsymbolic-functions

When creating a shared library, bind references to global function symbols to the definition within the shared library, if any.  This option is only meaningful on ELF platforms which support shared libraries.

As opposed to:

-Bsymbolic

When creating a shared library, bind references to global symbols to the definition within the shared library, if any.  Normally, it is possible for a program linked against a shared library to override the definition within the shared library. This option is only meaningful on ELF platforms which support shared libraries.

Nailed it!

The crash occurs because this flag makes us use a weird variant of symbol interposition, where symbol interposition happens for variable symbols like _S_once and _S_classic but not for function symbols like _S_initialize_once().

This results in a mismatch regarding how we access global variables: exported global variables are unique thanks to interposition, whereas every non-interposed function will access its own version of any non-exported global variable.

With all the knowledge that we have now gathered, it is easy to write a reproducer that does not involve any Firefox code:

/* main.cc */
#include <iostream>
#include <locale>

extern void pain();

int main() {
   pain();
   std::cout << "[main] " << std::locale::classic().name() << "\n";
   return 0;
}

/* pain.cc */
#include <iostream>
#include <locale>

void pain() {
   std::cout << "[pain] " << std::locale::classic().name() << "\n";
}

# Makefile
all:
   # libpain.so statically links libstdc++ and binds function symbols locally.
   $(CXX) pain.cc -fPIC -shared -o libpain.so -static-libstdc++ -Wl,-Bsymbolic-functions
   $(CXX) main.cc -fPIC -c -o main.o
   # The main binary links its own static libstdc++ (adjust the path to your system).
   $(CC) main.o -fPIC -o main /usr/lib/gcc/x86_64-redhat-linux/13/libstdc++.a -L. -Wl,-rpath=. -lpain -Wl,-Bsymbolic-functions
   ./main

clean:
   $(RM) libpain.so main

Understanding the bug is one step, and solving it is yet another story.
Should it be considered a libstdc++ bug that the code for locales is not compatible with -static-libstdc++ -Bsymbolic-functions?

It feels like combining these flags is a very nice way to dig our own grave, and that seems to be the opinion of the libstdc++ maintainers indeed.

Overall, perhaps the strangest part of this story is that this combination did not cause any trouble up until now.
Therefore, we suggested to the maintainer of the package to stop using -static-libstdc++.

There are other ways to use a different libstdc++ than available on the system, such as using dynamic linking and setting an RPATH to link with a bundled version.

Doing that allowed them to successfully deploy a fixed version of the package.
A few days after that, with the official release of Firefox 120, we noticed a very significant bump in volume for the same crash signature. Not again!

This time the volume was coming exclusively from users of NixOS 23.05, and it was huge!

After we shared the conclusions from our beta investigation with them, the maintainers of NixOS were able to quickly associate the crash with an issue that had not yet been backported for 23.05 and was causing the compiler to behave like -static-libstdc++.

To avoid such a mess in the future, we added detection for this particular setup in Firefox’s configure.

We are grateful to the people who have helped fix this issue, in particular:

  • Rico Tzschichholz (ricotz) who quickly fixed the Ubuntu 18.04 LTS package, and Amin Bandali (bandali) who provided help on the way;
  • Martin Weinelt (hexa) and Artturin for their prompt fixes for the NixOS 23.05 package;
  • Nicolas B. Pierron (nbp) for helping us get started with NixOS, which allowed us to quickly share useful information with the NixOS package maintainers.

The post Option Soup: the subtle pitfalls of combining compiler flags appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NAdvancing Mozilla’s mission through our work on localization standards

After the previous post highlighting what the Mozilla community and Localization Team achieved in 2023, it’s time to dive deeper on the work the team does in the area of localization technologies and standards.

A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:

We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.

To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.

That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.

In 2023, a large part of our work on localization standards has been focused on Unicode MessageFormat 2 (aka “MF2”), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial “technology preview” release as part of the Spring 2024 Unicode CLDR release.

Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.

So, besides the long term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so predicting results has not and will not be completely possible. One tangible benefit that we’ve been able to already identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: Rather than showing a message in pieces, we’ve adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.

Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!

At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.

SeaMonkeyTeething problems with archives

Hi All,

I am currently fixing a mess with the archives for 2.53.18.1.

There are a lot of extraneous artifacts that were stored there and now I’m cleaning them up.

Thankfully, this will be the last time I’m using this way of pushing to release.

My apologies for the mess.

:ewong

SeaMonkeySeaMonkey 2.53.18.1 updates

Hi All,

Just want to mention that the updates will be available soon.

Thank you for your patience.

:ewong

SeaMonkeySeaMonkey 2.53.18.1 is out!

Hi All,

Happy New Year, everyone!

The SeaMonkey Project is pleased to announce the very first release of the year: SeaMonkey 2.53.18.1! As it is a security fix, please check out [1] and/or [2] for the release notes.

Best regards,

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18.1

[2] – https://www.seamonkey-project.org/releases/2.53.18.1

Mozilla L10NMozilla Localization in 2023

A Year in Data

The Mozilla localization community had a busy and productive 2023. Let’s look at some numbers that defined our year:

  • 32 projects and 258 locales set up in Pontoon
  • 3,685 new user registrations
  • 1,254 active users, submitting at least one translation (on average 235 users per month)
  • 432,228 submitted translations
  • 371,644 approved translations
  • 23,866 new strings to translate

Slide summarizing the activity in Pontoon over 2023.

Thank you to all the volunteers who contributed to Mozilla’s localization efforts over the last 12 months!

In case you’re curious about the lion theme: localization is often referred to as l10n, a numeronym which looks like the word lion. That’s why our team’s logo is a lion head, stylized as the original Mozilla logo by artist Shepard Fairey.

Pontoon Development

A core area of focus in 2023 was pretranslation. From the start, our goal with this feature was to support the community by making it easier to leverage existing translations and provide a way to bootstrap translation of new content.

When pretranslation is enabled, any new string added in Pontoon will be pretranslated using a 100% match from translation memory or — if no match exists — the Google AutoML Translation engine, with a model custom-trained on the existing locale’s translation memory. Translations are stored in Pontoon with a special “pretranslated” status so that localizers can easily find and review them. Pretranslated strings are also saved to repositories (e.g. GitHub), and eventually ship in the product.
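
In pseudocode, the selection logic is roughly the following (the tm and mt interfaces are hypothetical, not Pontoon’s actual internals):

def pretranslate(source_string, tm, mt):
    # Prefer a perfect (100%) translation memory match; otherwise fall back
    # to the locale's custom-trained MT model. Either way the result carries
    # the special "pretranslated" status so localizers can find and review it.
    match = tm.best_match(source_string)
    if match is not None and match.quality == 100:
        text = match.target
    else:
        text = mt.translate(source_string)
    return {"string": text, "status": "pretranslated"}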

You can find more details on how we approached testing and involved the community in this blog post from July. Over the course of 2023 we pretranslated 14,033 strings for 16 locales across 15 projects.

Towards the end of the year, we also worked on two features that have been long requested by users: 1) it’s now possible to use Pontoon with a light theme; and 2) we improved the translation experience on mobile, with the original 3-column layout adapting to smaller screen sizes.

Screenshot of Pontoon’s UI with the light theme selected.

Screenshot of Pontoon UI on a smartphone running Firefox for Android.

Listening to user feedback remains our priority: in case you missed it, we have just published the results of a new survey, where we asked localizers which features they would like to see implemented in Pontoon. We look forward to implementing some of your fantastic ideas in 2024!

Community

Community is at the core of Mozilla’s localization model, so it’s crucial to identify sustainability issues as early as possible. Relying only on completion levels, or on how quickly a locale can respond to urgent localization requests, is not sufficient to really understand the health of a community. Indeed, an extremely dedicated volunteer can mask deeper problems, and these issues only become visible — and urgent — when such a person leaves a project, potentially without a clear succession plan.

To prevent these situations, we’ve been researching ways to measure the health of each locale by analyzing multiple data points — for example, the number of new sign-ups actively contributing to localization and getting reviews from translators and managers — and we’ve started reaching out to specific communities to trial test interventions. With the help of existing locale managers, this resulted in several promotions to translator (Arabic, Czech, German) or even manager (Czech, Russian, Simplified Chinese).

During these conversations with various local communities, we heard loud and clear how important in-person meetings are to understanding what Mozilla is working on, and how interacting with other volunteers and building personal connections is extremely valuable. Over the past few years, some unique external factors — COVID and an economic recession chief among them — made the organization of large scale events challenging. We investigated the feasibility of small-scale, local events organized directly by community members, but this initiative wasn’t successful since it required a significant investment of time and energy by localizers on top of the work they were already doing to support Mozilla with product localization.

To counterbalance the lack of in-person events and keep volunteers in the loop, we organized two virtual fireside chats for localizers in May and November (links to recordings).

What’s coming in 2024

In order to strengthen our connection with existing and potential volunteers, we’re planning to organize regular online events this year. We intend to experiment with different formats and audiences for these events, while also improving our presence on social networks (did you know we’re on Mastodon?). Keep an eye out on this blog and Matrix for more information in the coming months.

As many of you have asked in the past, we also want to integrate email functionality into Pontoon; users should be able to opt in to receive specific communications via email on top of in-app notifications. We also plan to experiment with automated emails to re-engage inactive users with elevated permissions (translators, managers).

It’s clear that a community can only be sustainable if there are active managers and translators to support new contributors. On one side, we will work to create onboarding material for new volunteers so that existing managers and translators can focus on the linguistic aspects. On the other, we’ll engage the community to discuss a refined set of policies that foster a more inclusive and transparent environment. For example, what should the process be when a locale doesn’t have a manager or active translator, yet there are contributors not receiving reviews? How long should an account retain elevated permissions if it’s apparently gone silent? What are the criteria for promotions to translator or manager roles?

For both initiatives, we will reach out to the community for feedback in the coming months.

As for Pontoon, you can expect some changes under the hood to improve performance and overall reliability, but also new user-facing features (e.g. fine-grained search, better translation memory management).

Thank you!

We want to thank all the volunteers who have dedicated their time and skills to localizing Mozilla products. Your tireless efforts are essential in advancing the Mozilla mission of fostering an open and accessible internet for everyone.

Looking ahead, we are excited about the opportunities that 2024 brings. We look forward to working alongside our community to expand the impact of localization and continue breaking down language barriers. Your support is invaluable, and together, we will continue shaping a more inclusive digital world. Thank you for being an integral part of this journey.

SUMO BlogIntroducing Mandy and Donna

Hey everybody,

I’m so thrilled to start 2024 with good news for you all. Mandy Cacciapaglia and Donna Kelly are joining our Customer Experience team as Product Support Manager for Firefox and Content Strategist, respectively. Here’s a bit from them both:

Hi there! Mandy here — I am Mozilla’s new Product Support Manager for Firefox. I’m so excited to collaborate with this awesome group, and dive into Firefox reporting, customer advocacy and feedback, and product support so we can keep elevating our amazing browser. I’m based in NYC, and outside of work you will find me watercolor painting, backpacking, or reading mysteries.

Hi everyone! I’m Donna, and I am very happy to be here as your new Content Strategist on the Customer Experience team. I will be working on content strategy to improve our knowledge base, documentation, localization, and overall user experience! In my free time, I love hanging out with my dog (a rescue tri-pawd named Sundae), hiking, reading (big Stephen King fan), playing video games, and anything involving food. Looking forward to getting to know everyone!

You’ll hear more from them in our next community call (which will be on January 17). In the meantime, please join me to congratulate and welcome both of them into the team!

SUMO Blog2023 in a nutshell

Hey SUMO nation,

As we’re inching closer towards 2024, I’d like to take a step back to reflect on what we’ve accomplished in 2023. It’s a lot, so let’s dive in! 

  • Overall pageviews

From Jan 1st to the end of November, we’ve got a total of 255+ million pageviews on SUMO. Pageviews have been dropping consistently since 2018, and this time around we’re down 7% from last year. This is far from bad, though, as it’s our lowest yearly drop since 2018.

  • Forum

In the forum, we’ve seen an average of 2.8k questions per month this year, a 6.67% downturn from last year. Our answer rate within 72 hours also went down, to 71% compared to 75% last year, as did our solved rate, 10% this year compared to 14% last year. In a typical month, around 200 contributors (excluding original posters) are active on the forum, compared to 240 last year.

*See Support glossary
  • KB

We see an increase across different metrics on KB contribution this year, though. In total, we’ve got 1,990 revisions (a 14% increase from last year) from 136 non-staff members. Our review rate this year is 80%, while our approval rate is 96% (compared to 73% and 95% in 2022). In total, we’ve got 29 non-staff reviewers this year.

  • Localization

On the localization side, the numbers are overall pretty normal. Total revisions are around 13K (same as last year) from 400 non-staff members, with a 93% review rate and 99% approval rate (compared to 90% and 99% last year) from a total of 118 non-staff reviewers.

  • Social Support

Year to date, the Social Support contributors have sent a total of 850 responses (compared to 908 last year) and interacted with 1,645 conversations. Our resolved rate has dropped to 40.74%, compared to 70% last year. We have made major improvements on other metrics, though. For example, this year our contributors were responsible for a larger share of our total responses (75% in total, compared to 39.6% last year). Our conversion rate also improved, from 20% in 2022 to 52% this year. This means our contributors have taken on a bigger role in answering the overall inbound volume and have replied more consistently than last year.

  • Mobile Store Support

On the Mobile Store Support side, our contributors this year have contributed 1,260 replies and interacted with 3,149 conversations in total. That puts our conversion rate at 36% this year, compared to 46% last year. Those are mostly contributions to non-English reviews.


In addition to the regular contribution, here are some of the community highlights from 2023:

  • We did some internal assessment and external benchmarking in Q1, which informed our experiments in Q2. Learn the results of those experiments from this call.
  • We also updated our contributor guidelines, including article review guidelines and created a new policy around the use of generative AI.
  • By the end of the year, the Spanish community has done something really amazing. They have managed to translate and update 70% of in-product desktop articles (as opposed to 11% when we started the call for help).

We’d also like to take this opportunity to highlight some of the Customer Experience team’s projects that we tackled this year (some with close involvement and help from the community).

We split this one into two concurrent projects:

  • Phase 1 Navigation Improvements — initial phase aims to:
    • Surface the community forums in a clearer way
    • Streamline the Ask a Question user flow
    • Improve link text and calls-to-action to better match what users might expect when navigating on the site
    • Updates to the main navigation and small changes to additional site UI (like sidebar menus, page headers, etc.) can be expected
  • Cross-system content structure and hierarchy — the goal of this project is to:
    • Improve our ability to gather data metrics across functional areas of SUMO (KB, ticketing, and forums)
    • Improve recommended “next steps” by linking related content across KB and Forums
    • Create opportunities for grouping and presenting content on SUMO by alternate categories and not just by product

Project Background:

  • This research was conducted between August 2023 and November 2023. The goal of this project is to provide actionable insights on how to improve the customer experience of SUMO.
  • Research approach:
    • Stakeholder engagement process
    • Surveyed 786 Mozilla Support users
    • Conducted three rounds of interviews recruited from survey respondents:
      • Sprint 1: Evaluated content and article structure
      • Sprint 2: Evaluated the overall SUMO customer experience
      • Sprint 3: Co-design of an improved SUMO experience
    • This research was conducted by PH1 Research, who conducted similar research for Mozilla in 2022.
  • Please consider: Participants for this study were recruited via a banner ad in SUMO. As a result, these findings only reflect the experiences and needs of users who actively use SUMO. They do not reflect users who may not be aware of SUMO or have decided not to use it.

Executive Summary:

  • Users consider SUMO a trustworthy and content-rich resource. SUMO offers resources that can appropriately help users of different technical levels. The most common user flow is via Google search. Very few are logging in to SUMO directly.
  • The goal of SUMO should be to assist Mozilla users to improve their product experience. Content should be consolidated and optimized to show fewer, high quality results on Google search and SUMO search. The article experience should aim to boost relevance and task success. The SUMO website should aid users to diagnose systems, understand problems, find solutions, and discover additional resources when needed.

Recommendations:

  • Our recommendation is that SUMO’s strategy should be to provide a self-service experience that makes users feel that Mozilla cares about their problems and offers a range of solutions appealing to various persona types (technical/non-technical).
  • The pillars for making SUMO valuable to users should be:
    • Confidence: As a user, I need to be confident that the resource provided will resolve my problem.
    • Guidance: As a user, I need to feel guided through the experience of finding a solution, even when I don’t understand the problem or solutions available.
    • Trust: As a user, I need to trust that the resources have been provided by a trustworthy authority on the subject (SUMO scores well here because of Mozilla).
  • CMS modernization:
    • Modernizing our CMS can provide significant benefits in terms of user experience, performance, security, flexibility, collaboration, and analytics.
    • This resulted in a decision to move forward with the plan to migrate our CMS to Wagtail — a modern, open-source content management system focused on flexibility and user experience.
    • We are currently in the process of planning the next phases for implementation.
  • Pocket migration to SUMO:
    • We successfully migrated and published 100% of previously identified Pocket help center content from HelpScout’s CMS to SUMO’s CMS, with proper redirects in place to ensure a seamless transition for the user.
    • The localization community began efforts to help us localize the content, which had previously only been available in en-US.
  • Firefox account to Mozilla account rebrand in early November.
  • Officially supporting account users and a login-less support flow (read more about that here).
  • Database migration from MySQL to Postgres:
    • This was a very challenging project, not only because we had to migrate our large codebase and very large data set from MySQL, but also because of the challenge of performing the actual data migration within a reasonable period of time, on the order of a few hours at most, so that we could minimize the disruption to users and contributors. In the end, it was a multi-month project comprising coordinated research, planning and effort between our engineering team and our SRE (Site Reliability Engineering) team. We’re now on a much better database foundation for the future, because:
      • Postgres is better suited for enterprise-level applications like ours, with very large datasets, frequent write operations and complex queries.
      • We can also take advantage of connection pooling via PgBouncer, which will improve our resilience under huge and often malicious traffic spikes (which have been occurring much more frequently during the past year).
      • Last but not least, our database now supports the full Unicode character set, which means it can fully handle all characters, including emojis, in all languages. Our MySQL database had only limited Unicode support, due to its initial configuration, and rather than invest in resolving that, which would have meant a significant chunk of work, we decided to invest instead in Postgres.

This year, you all continue to impress us with the persistence and dedication that you show to Mozilla by contributing to our platform, despite the current state of our world right now. To every single one of you who contributed in one way or another to SUMO, I’d like to express my sincere gratitude because without you all, our platform is just an empty shell. To celebrate this, we’ve prepared this simple dashboard with contribution data that you can filter based on username so you can see how much you’ve accomplished this year (we talked about this in our last community call this year).

Let’s be proud of what we’ve accomplished to keep the internet as a global & public resource for everybody, and let’s keep on rocking the helpful web through 2024 and beyond!

If you’ve been watching from the sidelines and are interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!

Mozilla L10N2024 Pontoon survey results

The results from the 2024 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

The remaining features ranked as follows:

  4. Add ability to preview Fluent strings in the editor (572 votes).
  5. Link project names in Concordance search results to corresponding strings (540 votes).
  6. Add “Copy translation from another locale as suggestion” batch action (523 votes).
  7. Add ability to receive automated notifications via email (521 votes).
  8. Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (501 votes).
  9. Add ability to read notifications one by one, or mark notifications as unread (495 votes).
  10. Add virtual keyboard with special characters to the editor (469 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

A total of 365 Pontoon users participated in the survey, 169 of whom voted on all features. Each user could give each feature 1 to 5 votes. Check out the full report.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

SeaMonkeyUpdates fixed

Hi All,

The updates have been fixed, as well as a lot of the missing files.

Seems as if I simply cannot handle multiple changes at the same time.

My apologies for the inconveniences caused.

:ewong

SeaMonkeyUpdates… erm.. update.

Hi all,

I have taken a look at what’s going on and am a bit puzzled.

  • Linux-i686 locales:
    • Missing: el, en-US, es-AR, es-ES, fi, fr, ka, nb-NO, nl, pl, pt-PT, ru, sk, sv-SE
    • Existing: cs, de, en-GB, hu, it, ja, pt-BR, zh-CN, zh-TW
  • Linux x86-64 locales:
    • Missing: de, el, en-US, es-ES, hu, it, ka, nb-NO, ru, sk, sv-SE, zh-TW
    • Existing: cs, en-GB, es-AR, fi, fr, ja, nl, pl, pt-BR, pt-PT, zh-CN
  • Mac locales:
    • Missing: cs, en-US, es-AR, fr, pt-BR, sk, zh-CN
    • Existing: de, el, en-GB, es-ES, fi, hu, it, ja-JP-mac, ka, nb-NO, nl, pl, pt-PT, ru, sv-SE, zh-TW
  • Win32 Locales:
    • Missing: cs, de, fi, nl, pl, pt-PT, ru, sv-SE
    • Existing: el, en-GB, en-US, es-AR, es-ES, fr, hu, it, ja, ka, nb-NO, pt-BR, sk, zh-CN, zh-TW
  • Win64 locales:
    • Missing: cs, de, en-GB, en-US, fr, it, ja, pl, pt-BR
    • Existing: el, es-AR, es-ES, fi, hu, ka, nb-NO, nl, pt-PT, ru, sk, sv-SE, zh-CN, zh-TW

No, I have no understanding of the pattern of the missing files.

So I’ll be changing the updates to using the ‘old’ place while I fix the ‘new’ place. (*wink*)

:ewong

SeaMonkeyMigration away from archive.mozilla.org addendum

Hi All,

In my previous blog post on the SeaMonkey Project migrating away from archive.mozilla.org, it seems there was some misunderstanding in the wording (I’ve just changed it at the request of Mozilla).

When I stated “We need to stop using archive.mozilla.org” and “They will most likely be left as is until Mozilla blows it away (or I do).”,  I literally meant “We” as in “the SeaMonkey Project”.

So in essence, what I *was* trying to state (and failing miserably) is that “The SeaMonkey Project needs to migrate away from archive.mozilla.org.” After 2023, when you go to https://archive.mozilla.org/pub/, you will not see seamonkey there.

End of an era.

:ewong

SeaMonkeyUpdates issue

Hi All,

It seems there are some missing updates, and I’m currently working on it.

Sorry for the inconvenience.

:ewong

hacks.mozilla.orgPuppeteer Support for the Cross-Browser WebDriver BiDi Standard

We are pleased to share that Puppeteer now supports the next-generation, cross-browser WebDriver BiDi standard. This new protocol makes it easy for web developers to write automated tests that work across multiple browser engines.

How Do I Use Puppeteer With Firefox?

The WebDriver BiDi protocol is supported starting with Puppeteer v21.6.0. When calling puppeteer.launch, pass in "firefox" as the product option and "webDriverBiDi" as the protocol option:

const browser = await puppeteer.launch({
  product: 'firefox',
  protocol: 'webDriverBiDi',
})

You can also use the "webDriverBiDi" protocol when testing in Chrome, reflecting the fact that WebDriver BiDi offers a single standard for modern cross-browser automation.
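
For comparison, here is a minimal sketch of the equivalent launch call targeting Chrome (assuming, as above, Puppeteer v21.6.0 or later); only the product option changes:

const browser = await puppeteer.launch({
  product: 'chrome',          // drive Chrome instead of Firefox
  protocol: 'webDriverBiDi',  // same cross-browser protocol for both engines
})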

In the future we expect "webDriverBiDi" to become the default protocol when using Firefox in Puppeteer.

Doesn’t Puppeteer Already Support Firefox?

Puppeteer has had experimental support for Firefox based on a partial re-implementation of the proprietary Chrome DevTools Protocol (CDP). This approach had the advantage of working without significant changes to the existing Puppeteer code. However, the CDP implementation in Firefox is incomplete and has significant technical limitations. In addition, the CDP protocol itself is not designed to be cross-browser and undergoes frequent breaking changes, making it unsuitable as a long-term solution for cross-browser automation.

To overcome these problems, we’ve worked with the WebDriver Working Group at the W3C to create a standard automation protocol that meets the needs of modern browser automation clients: this is WebDriver BiDi. For more details on the protocol design and how it compares to the classic HTTP-based WebDriver protocol, see our earlier posts.

As the standardization process has progressed, the Puppeteer team has added a WebDriver BiDi backend in Puppeteer, and provided feedback on the specification to ensure that it meets the needs of Puppeteer users, and that the protocol design enables existing CDP-based tooling to easily transition to WebDriver BiDi. The result is a single protocol based on open standards that can drive both Chrome and Firefox in Puppeteer.

Are All Puppeteer Features Supported?

Not yet; WebDriver BiDi is still a work in progress, and doesn’t yet cover the full feature set of Puppeteer.

Compared to the Chrome+CDP implementation, there are some feature gaps, including support for accessing the cookie store, network request interception, some emulation features, and permissions. These features are actively being standardized and will be integrated as soon as they become available. For Firefox, the only missing feature compared to the Firefox+CDP implementation is cookie access. In addition, WebDriver BiDi already offers improvements, including better support for multi-process Firefox, which is essential for testing some websites. More information on the complete set of supported APIs can be found in the Puppeteer documentation, and as new WebDriver BiDi features are enabled in Gecko we’ll publish details on the Firefox Developer Experience blog.

Nevertheless, we believe that the WebDriver-based Firefox support in Puppeteer has reached a level of quality which makes it suitable for many real automation scenarios. For example at Mozilla we have successfully ported our Puppeteer tests for pdf.js from Firefox+CDP to Firefox+WebDriver BiDi.

Is Firefox’s CDP Support Going Away?

We currently don’t have a specific timeline for removing CDP support. However, maintaining multiple protocols is not a good use of our resources, and we expect WebDriver BiDi to be the future of remote automation in Firefox. If you are using the CDP support outside of the context of Puppeteer, we’d love to hear from you (see below), so that we can understand your use cases, and help transition to WebDriver BiDi.

Where Can I Provide Feedback?

For any issues you experience when porting Puppeteer tests to BiDi, please open issues in the Puppeteer issue tracker, unless you can verify the bug is in the Firefox implementation, in which case please file a bug on Bugzilla.

If you are currently using CDP with Firefox, please join the #webdriver matrix channel so that we can discuss your use case and requirements, and help you solve any problems you encounter porting your code to WebDriver BiDi.

Update: The Puppeteer team has published “Harness the Power of WebDriver BiDi: Chrome and Firefox Automation with Puppeteer”.

The post Puppeteer Support for the Cross-Browser WebDriver BiDi Standard appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.18 is now out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of version 2.53.18 of this long-standing Internet suite.

Please check out [1] and/or [2]. Also note that the updates should be up now.

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18

[2] – https://www.seamonkey-project.org/releases/2.53.18

SUMO BlogWhat’s up with SUMO – Q4 2023

Hi everybody,

The last of our quarterly updates in 2023 comes early with this post. That means we won’t have the data from December just yet (but we’ll make sure to update the post later). Lots of updates happened after the last quarter, so let’s just dive in!

Welcome note and shout-outs from Q4

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • Kiki came back from maternity leave and Sarto bid her farewell, all in this quarter.
  • We have a new contributor policy around the use of generative AI tools. This was one of the things that Sarto initiated back then, so I’d like to give the credit to her. Please take some time to read and familiarize yourself with the policy.
  • Spanish contributors are pushing really hard to help localize the in-product and top articles for Firefox Desktop. I’m so proud that at the moment, 57.65% of Firefox Desktop in-product articles have been translated and updated in Spanish (compared to 11.8% when we started), and 80% of the top 50 articles are localized and updated in Spanish. Huge props to those I mentioned in the shout-outs section above.
  • We’ve got new locale leaders for Catalan and Indonesian (as I mentioned above). Please join me in congratulating Handi S & Carlos Tomás on their new roles!
  • The Customer Experience team has officially moved out of the Marketing org into the Strategy and Operations org led by Suba Vasudevan (more about that in our community meeting in December).
  • We’ve migrated the Pocket support platform (formerly under Help Scout) to SUMO. That means Pocket help articles are now available on Mozilla Support, and people looking for Pocket premium support can also ask a question through SUMO.
  • Firefox accounts transitioned to Mozilla accounts in early November this year. Read this article to learn more about the background for this transition.
  • We did a SUMO sprint for the Review Checker feature with the release of Firefox 119, even though we didn’t see much chatter about it.
  • Please check out this thread to learn more about recent platform fixes and improvements (including the use of emoji!)
  • We’ve also updated and moved the Kitsune documentation to GitHub recently. Check out this thread to learn more.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in October, November, and December! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you’re shy about asking questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to learn how to join.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.

Check out the SUMO Engineering Board to see what the platform team is currently doing, and submit a report through Bugzilla if you want to report a bug or request an improvement.

Community stats

KB

KB pageviews (*)

* The KB pageviews number is the total of KB pageviews for /en-US/ only
Month      Page views   Vs previous month
Oct 2023   7,061,331    +9.36%
Nov 2023   6,502,248    -7.92%
Dec 2023   TBD          TBD

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale   Oct 2023 pageviews (*)   Nov 2023 pageviews (*)   Dec 2023 pageviews (*)   Localization progress (per Dec 7) (**)
de       10.66%                   10.97%                   TBD                      93%
fr       7.10%                    7.23%                    TBD                      80%
zh-CN    6.84%                    6.81%                    TBD                      92%
es       5.59%                    5.49%                    TBD                      27%
ja       5.10%                    4.72%                    TBD                      33%
ru       3.67%                    3.80%                    TBD                      88%
pt-BR    3.30%                    3.11%                    TBD                      43%
it       2.52%                    2.48%                    TBD                      96%
zh-TW    2.42%                    2.61%                    TBD                      2%
pl       2.13%                    2.11%                    TBD                      83%
* Locale pageviews is the overall number of pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month      Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Oct 2023   3,897             66.33%                      10.01%                      59.68%
Nov 2023   2,660             64.77%                      9.81%                       65.74%
Dec 2023   TBD               TBD                         TBD                         TBD

Top 5 forum contributors in the last 90 days: 

Social Support

Month      Total tweets   Total moderation by contributors   Total replies by contributors   Response conversion rate
Oct 2023   311            209                                132                             63.16%
Nov 2023   245            137                                87                              63.50%
Dec 2023   TBD            TBD                                TBD                             TBD

Top 5 Social Support contributors in the past 3 months: 

  1. Tim Maks 
  2. Wim Benes
  3. Daniel B
  4. Philipp T
  5. Pierre Mozinet

Play Store Support

Firefox for Android only

Month      Total reviews   Conversations interacted with by contributors   Conversations replied to by contributors
Oct 2023   6,334           45                                              18
Nov 2023   6,231           281                                             75
Dec 2023   TBD             TBD                                             TBD

Top 5 Play Store contributors in the past 3 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Web Application SecurityMozilla VPN Security Audit 2023

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt6 App for macOS
  • Mozilla VPN Qt6 App for Linux
  • Mozilla VPN Qt6 App for Windows
  • Mozilla VPN Qt6 App for iOS
  • Mozilla VPN Qt6 App for Android

Here’s a summary of the items discovered within this security audit that the auditors rated as medium or higher severity:

  • FVP-03-003: DoS via serialized intent 
      • Data received via intents within the affected activity should be validated to prevent the Android app from exposing certain activities to third-party apps.
      • There was a risk that a malicious application could leverage this weakness to crash the app at any time.
      • This risk was addressed by Mozilla and confirmed by Cure53.
  • FVP-03-008: Keychain access level leaks WG private key to iCloud 
      • Cure53 confirmed that this risk has been addressed due to an extra layer of encryption, which protects the Keychain specifically with a key from the device’s secure enclave.
  • FVP-03-009: Lack of access controls on daemon socket
      • Access controls needed to be implemented to guarantee that the user sending commands to the daemon is permitted to initiate the intended action.
      • This risk has been addressed by Mozilla and confirmed by Cure53.
  • FVP-03-010: VPN leak via captive portal detection 
      • Cure53 advised that the captive portal detection feature be turned off by default to prevent an opportunity for IP leakage when using maliciously set up WiFi hotspots.
      • Mozilla addressed the risk by no longer pinging for a captive portal outside of the VPN tunnel.
  • FVP-03-011: Lack of local TCP server access controls
      • The VPN client exposes a local TCP interface running on port 8754, which is bound to localhost. Users on localhost can issue a request to the port and disable the VPN.
      • Mozilla addressed this risk as recommended by Cure53.
  • FVP-03-012: Rogue extension can disable VPN using mozillavpnnp (High)
      • mozillavpnnp does not sufficiently restrict the application caller.
      • Mozilla addressed this risk as recommended by Cure53.

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

The post Mozilla VPN Security Audit 2023 appeared first on Mozilla Security Blog.

hacks.mozilla.orgFirefox Developer Edition and Beta: Try out Mozilla’s .deb package!

A month ago, we introduced our Nightly package for Debian-based Linux distributions. Today, we are proud to announce we made our .deb package available for Developer Edition and Beta!

We’ve set up a new APT repository for you to install Firefox as a .deb package. These packages are compatible with the same Debian and Ubuntu versions as our traditional binaries.

Your feedback is invaluable, so don’t hesitate to report any issues you encounter to help us improve the overall experience.

Adopting Mozilla’s Firefox .deb package offers multiple benefits:

  • you will get better performance thanks to our advanced compiler-based optimizations,
  • you will receive the latest updates as fast as possible because the .deb is integrated into Firefox’s release process,
  • you will get hardened binaries with all security flags enabled during compilation,
  • you can continue browsing after upgrading the package, meaning you can restart Firefox at your convenience to get the latest version.
To set up the APT repository and install the Firefox .deb package, simply follow these steps:
# Create a directory to store APT repository keys if it doesn't exist:
sudo install -d -m 0755 /etc/apt/keyrings

# Import the Mozilla APT repository signing key:
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

# The fingerprint should be 35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3
gpg -n -q --import --import-options import-show /etc/apt/keyrings/packages.mozilla.org.asc | awk '/pub/{getline; gsub(/^ +| +$/,""); print "\n"$0"\n"}'

# Next, add the Mozilla APT repository to your sources list:
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null

# Update your package list and install the Firefox .deb package:
sudo apt-get update && sudo apt-get install firefox-beta  # Replace "beta" with "devedition" for Developer Edition

And that’s it! You have now installed the latest Firefox Beta/Developer Edition .deb package on your Linux system.

Firefox supports more than a hundred different locales. The packages mentioned above are in American English, but we have also created .deb packages containing the Firefox language packs. To install a specific language pack, replace fr in the example below with the desired language code:

sudo apt-get install firefox-beta-l10n-fr

To list all the available language packs, you can use this command after adding the Mozilla APT repository and running sudo apt-get update:

apt-cache search firefox-beta-l10n

The post Firefox Developer Edition and Beta: Try out Mozilla’s .deb package! appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NVote for new Pontoon features

It’s been a while since we last asked Pontoon users what new features we should develop, which is why we have decided to run another survey now.

But first, let’s take a look at the top-voted features from the last round that are all live now:

  1. Provide new contributors with guidelines before adding their first suggestion (details).
  2. Notify suggestion authors when their suggestions get reviewed (details).
  3. Pre-fill editor with 100% Translation Memory matches when available (details).

In addition to those, we also implemented a couple of features that didn’t make it into the top 3:

  • Expose managers on team dashboards to help users get in touch with them easily (details).
  • Add a light theme (details).

You asked, we listened! 🙂

2024 Survey

It’s now time to vote again! We’re working on the Pontoon roadmap for 2024, and we commit to implementing at least the 3 top-voted features chosen by Pontoon users.

Please let us know by December 11 how important the features listed below are to you, via this quick 5-minute survey:

  • Add virtual keyboard with special characters to the editor, customizable per locale (details).
  • Add “Copy translation from another locale as suggestion” batch action (details).
  • Link project names in Concordance search results to their corresponding strings (details).
  • Add ability to edit Translation Memory entries (details).
  • Add ability to propose new Terminology entries (details).
  • Improve overall performance of Pontoon translation workspace and dashboards (details).
  • Add ability to preview Fluent strings in the editor (details).
  • Add ability to receive automated notifications via email (details).
  • Add ability to read notifications one by one, or mark notifications as unread (details).
  • Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (details).

Note that at the end of the survey you will be able to add your own ideas, which you are always welcome to submit on GitHub.

hacks.mozilla.orgIntroducing llamafile

A special thanks to Justine Tunney of the Mozilla Internet Ecosystem (MIECO), who co-authored this blog post.

Today we’re announcing the first release of llamafile and inviting the open source community to participate in this new project.

llamafile lets you turn large language model (LLM) weights into executables.

Say you have a set of LLM weights in the form of a 4GB file (in the commonly-used GGUF format). With llamafile you can transform that 4GB file into a binary that runs on six OSes without needing to be installed.

This makes it dramatically easier to distribute and run LLMs. It also means that as models and their weights formats continue to evolve over time, llamafile gives you a way to ensure that a given set of weights will remain usable and perform consistently and reproducibly, forever.
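
Running the result requires no installation step. As a minimal sketch, assuming a hypothetical llamafile named mymodel.llamafile (the file name is illustrative, not a real release):

chmod +x mymodel.llamafile   # mark the downloaded single-file binary as executable (macOS, Linux, BSD)
./mymodel.llamafile          # run it; on Windows, you may need to rename the file with an .exe suffix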

We achieved all this by combining two projects that we love: llama.cpp (a leading open source LLM chatbot framework) with Cosmopolitan Libc (an open source project that enables C programs to be compiled and run on a large number of platforms and architectures). It also required solving several interesting and juicy problems along the way, such as adding GPU and dlopen() support to Cosmopolitan; you can read more about it in the project’s README.

This first release of llamafile is a product of Mozilla’s innovation group and was developed by Justine Tunney, the creator of Cosmopolitan. Justine has recently been collaborating with Mozilla via MIECO, and through that program Mozilla funded her work on the 3.0 release of Cosmopolitan (Hacker News discussion). With llamafile, Justine is excited to be contributing more directly to Mozilla projects, and we’re happy to have her involved.

llamafile is licensed Apache 2.0, and we encourage contributions. Our changes to llama.cpp itself are licensed MIT (the same license used by llama.cpp itself) so as to facilitate any potential future upstreaming. We’re all big fans of llama.cpp around here; llamafile wouldn’t have been possible without it and Cosmopolitan.

We hope llamafile is useful to you and look forward to your feedback.

The post Introducing llamafile appeared first on Mozilla Hacks - the Web developer blog.