
Victory at the Fourth Circuit: Court of Appeals allows Wikimedia Foundation v. NSA to proceed


Photo by Blogtrepreneur, CC BY 2.0.

Today, the Fourth Circuit Court of Appeals in Richmond, Virginia, handed down its decision in Wikimedia Foundation v. National Security Agency, holding that the Wikimedia Foundation may further pursue our claims against the United States National Security Agency (NSA) and other defendants. This marks an important victory for the privacy and free expression rights of Wikimedia users.

We joined eight co-plaintiffs in filing this lawsuit in March 2015 to challenge government mass surveillance and stand up for the privacy and free expression rights of Wikimedia users. The lawsuit specifically targets the NSA’s Upstream surveillance practices, which capture communications crossing the internet backbone. The free exchange of knowledge is threatened when Wikimedia users fear being watched as they search, read, or edit the Wikimedia projects.

Back in October 2015, Judge T.S. Ellis III of the United States District Court for the District of Maryland dismissed the case for lack of standing, a legal concept referring to a plaintiff’s ability to demonstrate that they have suffered an injury that the courts can redress. We promptly appealed the case to the Fourth Circuit.

The Fourth Circuit’s decision is complex: the Court vacated the lower court’s ruling with respect to the Wikimedia Foundation, and remanded the case back to the District of Maryland for further proceedings. A 2-1 majority found that the Wikimedia Foundation demonstrated standing in the case, but that the other plaintiffs did not. The dissenting judge would have found that all nine plaintiffs had standing. We, our co-plaintiffs, and our counsel at the American Civil Liberties Union (ACLU) will carefully review the opinion and determine the next steps for our case.

This marks an important step forward in Wikimedia Foundation v. NSA, and a victory for upholding the rights of privacy and free expression for Wikimedia users. We stand ready to continue this fight. A more detailed blog post, with further information about the case and opinion, is forthcoming, and we will keep members of the Wikimedia communities updated on the lawsuit. For more information about mass surveillance, Wikimedia Foundation v. NSA, and our other efforts to protect user privacy, please see our resources page about the case, or visit the ACLU.

Jim Buatti, Legal Counsel
Aeryn Palmer, Legal Counsel
Wikimedia Foundation

Special thanks to all who have supported us in this litigation, including the ACLU’s Patrick Toomey, Alex Abdo, and Ashley Gorski; and Aarti Reddy, Patrick Gunn, and Ben Kleine of our pro bono counsel Cooley, LLP; and the Wikimedia Foundation’s Zhou Zhou.








Aggregate Blogs with Pelican


I use Pelican for my website, including this blog, and I love it. So when the time came to start working on our professional website, it was the obvious choice.

One thing we wanted was a page that aggregates our blogs: a view of what the individual members of the collective do.

We thought about deploying a planet like Venus, but that would have implied essentially doing the theming work twice: once for the website, and once for the planet.

The rest of our website is made with Pelican, so we figured it would be nicer to use something that integrates with it. So I wrote a small Pelican plugin that does exactly that: pelican-planet.

It's quite simple to use. First, install it:

$ pip install pelican-planet

Then, in your pelicanconf.py file, enable the plugin and declare the feeds you want to aggregate:

PLUGINS = [
    ...
    'pelican_planet',
    ...
    ]

PLANET_FEEDS = {
    'Some amazing blog': 'https://example1.org/feeds/blog.atom.xml',
    'Another great blog': 'https://example2.org/feeds/blog.atom.xml',
    ...
    }

Now you need to write a Jinja2 template for the planet page. A simple one to generate a Markdown page could be:

title: The Fantastic Planet
slug: planet

Some blogs aggregated here.

{% for article in articles %}
# {{ article.title }}

{% endfor %}

Finally, declare your template as well as the path to the generated page:

PLANET_TEMPLATE = 'content/planet.md.tmpl'
PLANET_PAGE = 'content/planet.md'

Now, when you rebuild your website, the pelican-planet plugin will download the feeds and generate the page from the template; Pelican will then build the website, including that page, as if it were one you had written yourself.

You'll probably want to set up a systemd timer or a cron job to rebuild your website regularly, so that the planet page gets refreshed with new articles from the aggregated feeds.
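For example, a crontab entry along these lines would do it (the path, output directory, and schedule here are assumptions; adjust them to your own setup):

```
# Hypothetical crontab entry: rebuild the site hourly so the planet page
# picks up new articles from the aggregated feeds
0 * * * * cd /home/user/mysite && pelican content -o output -s pelicanconf.py
```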

Each article object will have a title, an author, a link to the original post, a summary, and a couple of other attributes.

Of course, the planet template can be much more complex than the trivial example above. For instance, we're not using Markdown but HTML directly, as that allows us to do anything we could possibly want with the result. Here's the relevant part of our template:

{% for article in articles %}
  <div class="article">
    <h2><a href="{{ article.link }}">{{ article.title }}</a></h2>
    <p class="article-metadata">{{ article.author }}&nbsp;{{ article.updated.strftime("%Y-%m-%d") }}</p>
    <div class="article-summary">
      {{ article.summary }}
    </div>
  </div>
  {% if not loop.last %}
    <hr>
  {% endif %}
{% endfor %}

Hopefully this will be useful to somebody else, but even if not, we're already happily using it over at Kymeria.fr.

Do let us know if you deploy it, report issues or feature requests, and send merge requests to make it better suited to your use case.


Uniting API documentation and code: InfoQ article


At the beginning of this year, I worked hard to summarize my thoughts on API documentation, continuous publishing, and technical accuracy for developer documentation. The result is an article on InfoQ.com, edited by Deepak Nadig, who was also forward-thinking in having me speak to a few teams at Intuit about API documentation coupled with code.

Always Be Publishing: Continuous Integration & Collaboration in Code Repositories for REST API Docs

Here are the key takeaways from the article.

Key Takeaways

  • API documentation provides a critical path for predicting customer success.
  • Collaborating on documentation close to the code provides better reviews of both the code and the document files, more automation efficiencies, and enables quality tests for documentation specifically.
  • Provide common documentation frameworks, standards, automation, and tools to give teams an efficiency boost.

If you have a story to tell about CI/CD for API docs, please send a Pull Request on GitHub to tell your story on http://docslikecode.com.


Richard Hughes: Reverse engineering ComputerHardwareIds.exe with winedbg


In an ideal world, vendors could use the same GUID value for hardware matching in both Windows and Linux firmware. When installing firmware and drivers in Windows, vendors can always use generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft, but there are a few issues with this otherwise simple plan.

The first, solved by a simple kernel patch I wrote (awaiting review by Jean Delvare), was that a few more SMBIOS fields required for the GUID calculation needed to be exposed in /sys/class/dmi/id.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or, more importantly, the secret namespace UUID used to seed the GUID. The only thing we have is the closed-source ComputerHardwareIds.exe program in the Windows DDK. Luckily, this runs in Wine, although Wine isn’t able to get the system firmware data itself. That can be worked around, and it actually makes testing easier.

So, some research. All we know from the MSDN page is that “Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm”, which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type 5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.
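The standard type-5 scheme is available out of the box in Python's uuid module, which is handy for experimenting. This sketch uses the stock DNS namespace purely as an illustration; the namespace and encoding ComputerHardwareIds.exe actually uses still have to be discovered:

```python
import uuid

# RFC 4122 name-based (type 5) UUID: SHA-1 over the namespace's raw bytes
# followed by the encoded name, with version/variant bits forced afterwards.
guid = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")

# The result is deterministic, and the version field is always 5.
assert guid.version == 5
```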

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."

…matches the reference HardwareID-14 output from Microsoft. So, onto debugging: using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> 

Great, so this is the secret namespace. The first parameter is the context, the second is the data address, the third is the length (0x10, i.e. 16 bytes: the size of a namespace UUID, since a SHA-1 digest would be 20 bytes) and the fourth is the flags. So let’s print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00

Using either the uuid module in Python or uuid_unparse in libuuid, we can format the namespace as 70ffd812-4c7f-4c7d-0000-000000000000. Now, this doesn’t look like a randomly generated UUID to me! Onto the next thing: the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q

So there we go. The encoding looks like little-endian UTF-16 (as expected; much of the Windows API works this way) and the joining character seems to be &.
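Putting those observations together, the whole scheme can be sketched in a few lines of Python: SHA-1 over the raw namespace bytes followed by the &-joined fields encoded as UTF-16LE, with the version and variant bits forced as the RFC requires. This is a reconstruction from the debugging session above, not the actual fwupd code, and the function name is illustrative:

```python
import hashlib
import uuid

# Namespace bytes as dumped from the winedbg session above
NAMESPACE = bytes.fromhex("70ffd8124c7f4c7d0000000000000000")

def hardware_id(*fields):
    """Reconstruct a Microsoft HardwareID GUID from SMBIOS fields (sketch)."""
    # Join the fields with '&' and encode as little-endian UTF-16,
    # matching the hash input observed in BCryptHashData
    name = "&".join(fields).encode("utf-16-le")
    digest = bytearray(hashlib.sha1(NAMESPACE + name).digest()[:16])
    digest[6] = (digest[6] & 0x0F) | 0x50  # version 5: name-based, SHA-1
    digest[8] = (digest[8] & 0x3F) | 0x80  # RFC 4122 variant
    return str(uuid.UUID(bytes=bytes(digest)))

guid = hardware_id("To be filled by O.E.M.", "To be filled by O.E.M.")
```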

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
--------------------
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
------------
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881}   <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1}   <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba}   <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541}   <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873}   <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814}   <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe}   <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4}   <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c}   <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c}   <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1}   <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847}   <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160}   <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773}   <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234}   <- Manufacturer

This basically matches the output of ComputerHardwareIds.exe on the same hardware. If the kernel patch gets into the next release, I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.


Dave Neary: Of humans and feelings


It was a Wednesday morning. I had just connected to email, to realise that something was wrong with the developer website. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure in ACLs that had grown organically over time – and had accidentally removed a group containing a number of community members not working for The Company from the access they previously had.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.


The new contribution workflow for GNOME

The GNOME Project has announced a streamlined contribution system built around a Flatpak-based build system. "No specific distribution required. No specific version required. No dependencies hell. Reproducible, if it builds for me it will build for you. All with an UI and integrated, no terminal required. Less than five minutes of downloading plus building and you are contributing."