
Feed aggregator

Updated pkg://localhostoih/sfe/library/gnu/openldap/src

SFE OpenIndiana Hipster - Wed, 12/19/2018 - 00:24
pkg://localhostoih/sfe/library/gnu/openldap/src@2.4.45,5.11-0.2017.0.0.5:20181218T101451Z, a new version of an existing package, was added to the repository.
Categories: SFE

Added pkg://localhostoih/components/library/libsodium

SFE OpenIndiana Hipster - Wed, 12/19/2018 - 00:24
pkg://localhostoih/components/library/libsodium@1.0.13,5.11-0.2017.0.0.5:20181218T215452Z was added to the repository.
Categories: SFE

Added pkg://localhostoih/components/library/libsodium/src

SFE OpenIndiana Hipster - Wed, 12/19/2018 - 00:24
pkg://localhostoih/components/library/libsodium/src@1.0.13,5.11-0.2017.0.0.5:20181218T215323Z was added to the repository.
Categories: SFE

Updated pkg://localhostoih/sfe/library/gnu/openldap

SFE OpenIndiana Hipster - Wed, 12/19/2018 - 00:24
pkg://localhostoih/sfe/library/gnu/openldap@2.4.45,5.11-0.2017.0.0.5:20181218T104121Z, a new version of an existing package, was added to the repository.
Categories: SFE

Updated pkg://localhostoih/sfe/library/gnu/openldap

SFE OpenIndiana Hipster - Wed, 12/19/2018 - 00:24
pkg://localhostoih/sfe/library/gnu/openldap@2.4.45,5.11-0.2017.0.0.5:20181218T101828Z, a new version of an existing package, was added to the repository.
Categories: SFE

A EULA in FOSS clothing?

The Observation Deck - Bryan Cantrill - Mon, 12/17/2018 - 04:01

There was a tremendous amount of reaction to and discussion about my blog entry on the midlife crisis in open source. As part of this discussion on HN, Jay Kreps of Confluent took the time to write a detailed response — which he shortly thereafter elevated into a blog entry.

Let me be clear that I hold Jay in high regard, as both a software engineer and an entrepreneur — and I appreciate the time he took to write a thoughtful response. That said, there are aspects of his response that I found troubling enough to closely re-read the Confluent Community License — and that in turn has led me to a deeply disturbing realization about what is potentially going on here.

Here is what Jay said that I found troubling:

The book analogy is not accurate; for starters, copyright does not apply to physical books and intangibles like software or digital books in the same way.

Now, what Jay said is true to a degree in that (as with many different kinds of expression) copyright law has provisions specific to software; these can be found in 17 U.S.C. § 117. But the fact that Jay also made reference to digital books was odd; digital books really have nothing to do with software (or not any more so than any other kind of creative expression). That said, digital books and proprietary software do actually share one thing in common, though it’s horrifying: in both cases their creators have maintained that you don’t actually own the copy you paid for. That is, unlike a book, you don’t actually buy a copy of a digital book; you merely acquire a license to use it under their terms. But how do they do this? Because when you access the digital book, you click “agree” on a license — an End User License Agreement (EULA) — that makes clear that you don’t actually own anything. The exact language varies; take (for example) VMware’s end user license agreement:

2.1 General License Grant. VMware grants to You a non-exclusive, non-transferable (except as set forth in Section 12.1 (Transfers; Assignment) license to use the Software and the Documentation during the period of the license and within the Territory, solely for Your internal business operations, and subject to the provisions of the Product Guide. Unless otherwise indicated in the Order, licenses granted to You will be perpetual, will be for use of object code only, and will commence on either delivery of the physical media or the date You are notified of availability for electronic download.

That’s a bit wordy and oblique; in this regard, Microsoft’s Windows 10 license is refreshingly blunt:

(2)(a) License. The software is licensed, not sold. Under this agreement, we grant you the right to install and run one instance of the software on your device (the licensed device), for use by one person at a time, so long as you comply with all the terms of this agreement.

That’s pretty concise: “The software is licensed, not sold.” So why do this at all? EULAs are an attempt to get out of copyright law — where the copyright owner is quite limited in the rights afforded to them as to how the content is consumed — and into contract law, where there are many fewer such limits. And EULAs have accordingly historically restricted (or tried to restrict) all sorts of uses like benchmarking, reverse engineering, running with competitive products (or, say, being used by a competitor to make competitive products), and so on.

Given the onerous restrictions, it is not surprising that EULAs are very controversial. They are also legally dubious: when you are forced to click through or (as it used to be back in the day) forced to unwrap a sealed envelope on which the EULA is printed to get to the actual media, it’s unclear how much you are actually “agreeing” to — and it may be considered a contract of adhesion. And this is just one of many legal objections to EULAs.

Suffice it to say, EULAs have long been considered open source poison, so with Jay’s frightening reference to EULA’d content, I went back to the Confluent Community License — and proceeded to kick myself for having missed it all on my first quick read. First, there’s this:

This Confluent Community License Agreement Version 1.0 (the “Agreement”) sets forth the terms on which Confluent, Inc. (“Confluent”) makes available certain software made available by Confluent under this Agreement (the “Software”). BY INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF THE SOFTWARE, YOU AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO SUCH TERMS AND CONDITIONS, YOU MUST NOT USE THE SOFTWARE. IF YOU ARE RECEIVING THE SOFTWARE ON BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE ACTUAL AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT ON BEHALF OF SUCH ENTITY.

You will notice that this looks nothing like any traditional source-based license — but it is exactly the kind of boilerplate that you find on EULAs, terms-of-service agreements, and other contracts that are being rammed down your throat. And then there’s this:

1.1 License. Subject to the terms and conditions of this Agreement, Confluent hereby grants to Licensee a non-exclusive, royalty-free, worldwide, non-transferable, non-sublicenseable license during the term of this Agreement to: (a) use the Software; (b) prepare modifications and derivative works of the Software; (c) distribute the Software (including without limitation in source code or object code form); and (d) reproduce copies of the Software (the “License”).

On the one hand, this looks like the opening of an open source license like (say) the Apache License (albeit missing important words like “perpetual” and “irrevocable”), but the next two sentences are the difference that is the focus of the license:

Licensee is not granted the right to, and Licensee shall not, exercise the License for an Excluded Purpose. For purposes of this Agreement, “Excluded Purpose” means making available any software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online service that competes with Confluent products or services that provide the Software.

But how can you later tell me that I can’t use my copy of the software because it competes with a service that Confluent started to offer? Or is that copy not in fact mine? This is answered in section 3:

Confluent will retain all right, title, and interest in the Software, and all intellectual property rights therein.

Okay, so my copy of the software isn’t mine at all. On the one hand, this is (literally) proprietary software boilerplate — but I was given the source code and the right to modify it; how do I not own my copy? Again, proprietary software is built on the notion that — unlike the book you bought at the bookstore — you don’t own anything: rather, you license the copy that is in fact owned by the software company. And again, as it stands, this is dubious, and courts have ruled against “licensed, not sold” software. But how can a license explicitly allow me to modify the software and at the same time tell me that I don’t own the copy that I just modified?! And to be clear: I’m not asking who owns the copyright (that part is clear, as it is for open source) — I’m asking who owns the copy of the work that I have modified? How can one argue that I don’t own the copy of the software that I downloaded, modified and built myself?!

This prompts the following questions, which I also asked Jay via Twitter:

  1. If I git clone software covered under the Confluent Community License, who owns that copy of the software?

  2. Do you consider the Confluent Community License to be a contract?
  3. Do you consider the Confluent Community License to be a EULA?

To Confluent: please answer these questions, and put the answers in your FAQ. Again, I think it’s fine for you to be an open core company; just make this software proprietary and be done with it. (And don’t let yourself be troubled about the fact that it was once open source; there is ample precedent for reproprietarizing software.) What I object to (and what I think others object to) is trying to be at once open and proprietary; you must pick one.

To GitHub: Assuming that this is in fact a EULA, I think it is perilous to allow EULAs to sit in public repositories. It’s one thing to have one click through to accept a license (though again, that itself is dubious), but to say that a git clone is an implicit acceptance of a contract that happens to be sitting somewhere in the repository beggars belief. With efforts like choosealicense.com, GitHub has been a model in guiding projects with respect to licensing; it would be helpful for GitHub’s counsel to weigh in on their view of this new strain of source-available proprietary software and the degree to which it comes into conflict with GitHub’s own terms of service.

To foundations concerned with software liberties, including the Apache Foundation, the Linux Foundation, the Free Software Foundation, the Electronic Frontier Foundation, the Open Source Initiative, and the Software Freedom Conservancy: the open source community needs your legal review on this! I don’t think I’m being too alarmist when I say that this is potentially a dangerous new precedent being set; it would be very helpful to have your lawyers offer their perspectives on this, even if they disagree with one another. We seem to be in some terrible new era of frankenlicenses, where the worst of proprietary licenses are bolted on to the goodwill created by open source licenses; we need your legal voices before these creatures destroy the village!

Categories: Personal Blogs

Open source confronts its midlife crisis

The Observation Deck - Bryan Cantrill - Sat, 12/15/2018 - 08:50

Midlife is tough: the idealism of youth has faded, as has inevitably some of its fitness and vigor. At the same time, the responsibilities of adulthood have grown: the kids that were such a fresh adventure when they were infants and toddlers are now grappling with their own transition into adulthood — and you try to remind yourself that the kids that you have sacrificed so much for probably don’t actually hate your guts, regardless of that post they just liked on the ‘gram. Making things more challenging, while you are navigating the turbulence of teenagers, your own parents are likely entering life’s twilight, needing help in new ways from their adult children. By midlife, in addition to the singular joys of life, you have also likely experienced its terrible sorrows: death, heartbreak, betrayal. Taken together, the fading of youth, the growth in responsibility and the endurance of misfortune can lead to cynicism or (worse) drastic and poorly thought-out choices. Add in a little fear of mortality and some existential dread, and you have the stuff of which midlife crises are made…

I raise this not because of my own adventures at midlife, but because it is clear to me that open source — now several decades old and fully adult — is going through its own midlife crisis. This has long been in the making: for years, I (and others) have been critical of service providers’ parasitic relationship with open source, as cloud service providers turn open source software into a service offering without giving back to the communities upon which they implicitly depend. At the same time, open source has been (rightfully) entirely unsympathetic to the proprietary software models that have been burned to the ground — but also seemingly oblivious to the larger economic waves that have buoyed them.

So it seemed like only a matter of time before the companies built around open source software would have to confront their own crisis of confidence: open source business models are really tough, selling software-as-a-service is one of the most natural of them, the cloud service providers are really good at it — and their commercial appetites seem boundless. And, like a new cherry red two-seater sports car next to a minivan in a suburban driveway, some open source companies are dealing with this crisis exceptionally poorly: they are trying to restrict the way that their open source software can be used. These companies want it both ways: they want the advantages of open source — the community, the positivity, the energy, the adoption, the downloads — but they also want to enjoy the fruits of proprietary software companies in software lock-in and its concomitant monopolistic rents. If this were entirely transparent (that is, if some bits were merely being made explicitly proprietary), it would be fine: we could accept these companies as essentially proprietary software companies, albeit with an open source loss-leader. But instead, these companies are trying to license their way into this self-contradictory world: continuing to claim to be entirely open source, but perverting the license under which portions of that source are available. Most gallingly, they are doing this by hijacking open source nomenclature. Of these, the laughably named commons clause is the worst offender (it is plainly designed to be confused with the purely virtuous creative commons), but others (including CockroachDB’s Community License, MongoDB’s Server Side Public License, and Confluent’s Community License) are little better. And in particular, as it apparently needs to be said: no, “community” is not the opposite of “open source” — please stop sullying its good name by attaching it to licenses that are deliberately not open source! But even if they were more aptly named (e.g. “the restricted clause” or “the controlled use license” or — perhaps most honest of all — “the please-don’t-put-me-out-of-business-during-the-next-reInvent-keynote clause”), these licenses suffer from a serious problem: they are almost certainly asserting rights that the copyright holder doesn’t in fact have.

If I sell you a book that I wrote, I can restrict your right to read it aloud for an audience, or sell a translation, or write a sequel; these restrictions are rights afforded the copyright holder. I cannot, however, tell you that you can’t put the book on the same bookshelf as that of my rival, or that you can’t read the book while flying a particular airline I dislike, or that you aren’t allowed to read the book and also work for a company that competes with mine. (Lest you think that last example absurd, that’s almost verbatim the language in the new Confluent Community (sic) License.) I personally think that none of these licenses would withstand a court challenge, but I also don’t think it will come to that: because the vendors behind these licenses will surely fear that they wouldn’t survive litigation, they will deliberately avoid inviting such challenges. In some ways, this netherworld is even worse, as the license becomes a vessel for unverifiable fear of arbitrary liability.

Legal dubiousness aside, as with that midlife hot rod, the licenses aren’t going to address the underlying problem. To be clear, the underlying problem is not the licensing, it’s that these companies don’t know how to make money — they want open source to be its own business model, and seeing that the cloud service providers have an entirely viable business model, they want a piece of the action. But as a result of these restrictive riders, one of two things will happen with respect to a cloud services provider that wants to build a service offering around the software:

  1. The cloud services provider will build their service not based on the software, but rather on another open source implementation that doesn’t suffer from the complication of a lurking company with brazenly proprietary ambitions.

  2. The cloud services provider will build their service on the software, but will use only the truly open source bits, reimplementing (and keeping proprietary) any of the surrounding software that they need.

In the first case, the victory is strictly pyrrhic: yes, the cloud services provider has been prevented from monetizing the software — but the software will now have less of the adoption that is the lifeblood of a thriving community. In the second case, there is no real advantage over the current state of affairs: the core software is still being used without the open source company being explicitly paid for it. Worse, the software and its community have been harmed: where one could previously appeal to the social contract of open source (namely, that cloud service providers have a social responsibility to contribute back to the projects upon which they depend), now there is little to motivate such reciprocity. Why should the cloud services provider contribute anything back to a company that has declared war on it? (Or, worse, implicitly accused it of malfeasance.) Indeed, as long as fights are being picked with them, cloud service providers will likely clutch their bug fixes in the open core as a differentiator, cackling to themselves over the gnarly race conditions that they have fixed of which the community is blissfully unaware. Is this in any way a desired end state?

So those are the two cases, and they are both essentially bad for the open source project. Now, one may notice that there is a choice missing, and for those open source companies that still harbor magical beliefs, let me put this to you as directly as possible: cloud services providers are emphatically not going to license your proprietary software. I mean, you knew that, right? The whole premise with your proprietary license is that you are finding that there is no way to compete with the operational dominance of the cloud services providers; did you really believe that those same dominant cloud services providers can’t simply reimplement your LDAP integration or whatever? The cloud services providers are currently reproprietarizing all of computing — they are making their own CPUs for crying out loud! — reimplementing the bits of your software that they need in the name of the service that their customers want (and will pay for!) won’t even move the needle in terms of their effort.

Worse than all of this (and the reason why this madness needs to stop): licenses that are vague with respect to permitted use are corporate toxin. Any company that has been through an acquisition can speak of the peril of the due diligence license audit: the acquiring entity is almost always deep pocketed and (not unrelatedly) risk averse; the last thing that any company wants is for a deal to go sideways because of concern over unbounded liability to some third-party knuckle-head. So companies that engage in license tomfoolery are doing worse than merely not solving their own problem: they are potentially poisoning the wellspring of their own community.

So what to do? Those of us who have been around for a while — who came up in the era of proprietary software and saw the merciless transition to open source software — know that there’s no way to cross back over the Rubicon. Open source software companies need to come to grips with that uncomfortable truth: their business model isn’t their community’s problem, and they should please stop trying to make it one. And while they’re at it, it would be great if they could please stop making outlandish threats about the demise of open source; they sound like shrieking proprietary software companies from the 1990s, warning that open source will be ridden with nefarious backdoors and unspecified legal liabilities. (Okay, yes, a confession: just as one’s first argument with their teenager is likely to give their own parents uncontrollable fits of smug snickering, those of us who came up in proprietary software may find companies decrying the economically devastating use of their open source software to be amusingly ironic — but our schadenfreude cups runneth over, so they can definitely stop now.)

So yes, these companies have a clear business problem: they need to find goods and services that people will exchange money for. There are many business models that are complementary with respect to open source, and some of the best open source software (and certainly the least complicated from a licensing drama perspective!) comes from companies that simply needed the software and open sourced it because they wanted to build a community around it. (There are many examples of this, but the outstanding Envoy and Jaeger both come to mind — the former from Lyft, the latter from Uber.) In this regard, open source is like a remote-friendly working policy: it’s something that you do because it makes economic and social sense; even as it’s very core to your business, it’s not a business model in and of itself.

That said, it is possible to build business models around the open source software that is a company’s expertise and passion! Even though the VC that led the last round wants to puke into a trashcan whenever they hear it, business models like “support”, “services” and “training” are entirely viable! (That’s the good news; the bad news is that they may not deliver the up-and-to-the-right growth that these companies may have promised in their pitch deck — and they may come at too low a margin to pay for large teams, lavish perks, or outsized exits.) And of course, making software available as a service is also an entirely viable business model — but I’m pretty sure they’ve heard about that one in the keynote.

As part of their quest for a business model, these companies should read Adam Jacob’s excellent blog entry on sustainable free and open source communities. Adam sees what I see (and Stephen O’Grady sees and Roman Shaposhnik sees), and he has taken a really positive action by starting the Sustainable Free and Open Source Communities project. This project has a lot to be said for it: it explicitly focuses on building community; it emphasizes social contracts; it seeks longevity for the open source artifacts; it shows the way to viable business models; it rejects copyright assignment to a corporate entity. Adam’s efforts can serve to clear our collective head, and to focus on what’s really important: the health of the communities around open source. By focusing on longevity, we can plainly see restrictive licensing as the death warrant that it is, shackling the fate of a community to that of a company. (Viz. after the company behind AGPL-licensed RethinkDB capsized, it took the Linux Foundation buying the assets and relicensing them to rescue the community.) Best of all, it’s written by someone who has built a business that has open source software at its heart. Adam has endured the challenges of the open core model, and is refreshingly frank about its economic and psychic tradeoffs. And if he doesn’t make it explicit, Adam’s fundamental optimism serves to remind us, too, that any perceived “danger” to open source is overblown: open source is going to endure, as no company is going to be able to repeal the economics of software. That said, as we collectively internalize that open source is not a business model on its own, we will likely see fewer VC-funded open source companies (though I’m honestly not sure that that’s a bad thing).

I don’t think that it’s an accident that Adam, Stephen, Roman and I see more or less the same thing and are more or less the same age: not only have we collectively experienced many sides of this, but we are at once young enough to still recall our own idealism, yet old enough to know that coercion never endures in the limit. In short, this too shall pass — and in the end, open source will survive its midlife questioning just as people in midlife get through theirs: by returning to its core values and by finding rejuvenation in its communities. Indeed, we can all find solace in the fact that while life is finite, our values and our communities survive us — and that our engagement with them is our most important legacy.

Categories: Personal Blogs

Golang sync.Cond vs. Channel...

/dev/dump - Garrett D'Amore - Mon, 12/10/2018 - 21:47
The backstory here is that mostly I love the Go programming language.

But I've been very dismayed by certain statements from some of the core Go team members about topics that have significant ramifications for my concurrent application design.  Specifically, bold statements to the effect that "channels" are the way to write concurrent programs, and deemphasizing condition variables.  (In one case, there is even a proposal to remove condition variables entirely from Go2!)

The Go Position
Essentially, the Go team believes very strongly in a design principle that can be stated thusly:

"Do not communicate by sharing memory; instead, share memory by communicating."

This design principle underlies the design of channels, which behave very much like UNIX pipes, although there are some very surprising semantics associated with them, which I have found limiting over the years.  More on that below.

Certainly, if you can avoid having shared memory state, but instead pass your entire state between cooperating parties, this leads to a simpler, lock free (sort of -- channels have their own locks under the hood!) design.  When your work is easily expressed as a pattern of pipelines, this is a better design.

The Real World
The problem is that sometimes (frequently in the real world) your design cannot be expressed this way.   Imagine a game engine, dealing with events from the network,  multiple players, input sources, physics, modeling, etc.  One simple design is to use a single engine model, with a single goroutine, and have events come in via many channels.  Then you have to create a giant select loop to consume events.  This is typical of large event-driven systems.

There are some problems with this model.

  1.  Adding channels dynamically just isn't really possible, because you have a single hard-coded select loop, which means you can't always cope with changes in the real world.   (For example, if you have a channel for inputs, what happens when someone plugs in a new controller?)
  2. Any processing that has to be done on your common state needs to be in that giant event loop.  For example, updates to lighting effects because of an in-game event like a laser beam need to know lots of things about the model -- the starting point of the laser beam, the position of any possible objects in the path of the laser, and so forth.  And then this can update the state model with things like whether the beam hit an object, causing a player kill, etc.
  3. Consequently, it is somewhere between difficult and impossible to really engage multiple CPU cores in this model.  (Modern multithreaded games may have an event loop, but they will also make heavy use of locks to access shared state, in order to permit physics calculations and such to be done in parallel with other tasks.)

So in the real world, we sometimes have to share memory still.

Limitations of Channels
There are some other specific problems with channels as well.

  • Closed channels cannot be closed again (panic if you do), and writing to a closed channel panics. 
  • This means that you cannot easily use Go channels with multiple writers.  Instead, you have to orchestrate closing the channel with some other outside synchronization primitive, such as a mutex and flag, or a wait group.  This semantic also means that close() is not idempotent.  That's a really unfortunate design choice.
  • It's not possible to broadcast to multiple readers simultaneously with a channel other than by closing it.  For example, if I want to wake a bunch of readers simultaneously (such as to notify multiple client applications about a significant change in a global status), I have no easy way to do that.  I either need to have separate channels for each waiter, or I need to hack together something else -- for example, adding a mutex and allocating a fresh replacement channel each time I need to do a broadcast.  (The mutex has to be used so that waiters know to rescan for the changed channel, and to ensure that multiple signalers don't try to close the same channel twice.)
  • Channels are slow.  More correctly, select with multiple channels is slow.  This means that designs where I have multiple potential "wakers" (for different events) require the use of separate channels, with separate cases within a select statement.  For performance sensitive work, the cost of adding a single additional case to a select statement was found to be quite measurable.
There are other things about channels that are unfortunate (for example, no way to peek, or to return an object to a channel), but not necessarily fatal.

What does concern me is the false belief that I think the Go maintainers are expressing, that channels are a kind of panacea for concurrency problems.

Can you convert any program that uses shared state into one that uses channels instead?  Probably.

Would you want to?  No.  For many kinds of problems, the constructs you have to create to make this work, such as passing around channels of channels, allocating new channels for each operation, etc. are fundamentally harder to understand, less performant, and more fragile than a simpler design making use of a single mutex and a condition variable would be.

Others have written on this as well.

Channels Are Not A Universal Cure
It has been said before that the Go folks are guilty of ignoring the work that has been done in operating systems for the past several decades (or maybe rather of being guilty of NIH). I believe that the attempt to push channels as the solution over all others is another sign of this.  We (in the operating system development community) have ample experience using threads (true concurrency), mutexes, and condition variables to solve large numbers of problems with real concurrency, and doing so scalably.

It takes a lot of hubris for the Golang team to say we've all been doing it wrong the entire time.  Indeed, if you look for condition variables in the implementation of the standard Go APIs, you will find them.  Really, this is a tool in the toolbox, and a useful one, and I personally find it a bit insulting that the Go team seems to treat this as a tool with sharp edges with which I can't really be trusted.

I also think there is a recurring disease in our industry to try to find a single approach as a silver bullet for all problems -- and this is definitely a case in point.  Mature software engineers understand that there are many different problems, and different tools to solve them, and should be trusted to understand when a certain tool is or is not appropriate.

Categories: Personal Blogs

Visiting Chaos Communication Congress #35C3 in Leipzig, Germany

SFE Articles - Sun, 12/09/2018 - 23:02

Update: Finally I've got my ticket for 35C3 - I'm looking forward to seeing you there! At the congress you can reach me on a DECT phone by dialing extension "tomw" or 8669. You can also reach the extensions from the outside by prepending the congress phone number (to be announced), in case you don't have a DECT or GSM extension registered.

Buying a ticket for Chaos Communication Congress #35C3 is a bit of an art, and it is a social thing too.
If you are stuck deep in the C code while hacking, you'll miss the starting time of the ticket sale, for sure.
But fortunately my local crowd was perfectly prepared, and I got a ticket via them.

If you want to meet in person, then write a comment or send me a direct message to http://twitter.com/sfepackages

I would be very happy to discuss everything from ZFS to OpenIndiana, Solaris, OmniOS and packaging efforts.
Or do you want to talk about SFE packages for SPARC CPUs?

See you soon at #35C3, Dec 27th - Dec 30th 2018!

Regards,
Thomas

Tags: #35C3 Chaos Communication Congress Leipzig 2019
Categories: SFE

Go modules, so much promise, so much busted

/dev/dump - Garrett D'Amore - Sat, 12/01/2018 - 21:33
Folks who follow me may know that Go is one of my favorite programming languages.  The ethos of Go has historically been closer to that of C, but seems mostly to try to improve on the things that make C less than ideal for lots of projects.
One of the challenges that Go has always had is its very weak support for versioning, dependency management, and vendoring.  The Go team's historic promise and premise (called the Go1 Promise) was that the latest version in any repo should always be preferred. This has a few ramifications:
  • No breaking changes permitted in a library, or package, ever.
  • The package should be "bug-free" at master.  (I.e. regression free.)
  • The package should live forever.

For small projects, these are noble goals, but over time it's been well demonstrated that this doesn't work. APIs too often need to evolve (perhaps to correct earlier mistakes) in incompatible ways. Sometimes it's easier to discard an older API than to update it to support new directions.
Various 3rd party solutions, such as gopkg.in, have been offered to deal with this, by providing some form of semantic versioning support.
Recently, go1.11 was released with an opt-in new feature called "modules".  The premise here is to provide a way for packages to manage dependencies, and to break away from the historic pain point of $GOPATH.
Unfortunately, with go modules, they have basically tossed the Go1 promise out the window. 
Packages that have a v2 in their import URL (like my mangos version 2 package) are assumed to have certain layouts, and are required to have a new go.mod module file to be importable in any project using modules.  This is a new, unannounced requirement, and it broke my project from being used with any other code that wants to use modules.  (As of right now this is still broken.)
At the moment, I believe that there is no way to correct my repository so that it is importable by both old code and new code using the same import URL.  The "magical" handling of "v2" in the import path seems to preclude this.  (I suspect that I probably need different, contradictory lines in the HTML file that I use to serve "go-imports", depending on whether someone is using the new style Go modules or the old style $GOPATH imports.)
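For readers following along: the module-aware layout expects the major version to appear as a suffix of the module path declared in go.mod. A minimal sketch of what such a file looks like (the example.com import path is purely illustrative, not the actual mangos path):

```
module example.com/mangos/v2

go 1.11
```

Any project importing the package under modules must then use the "/v2"-suffixed import path, which is exactly the requirement that older $GOPATH-style consumers never had.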
The old way of handling vendored code is no longer used.  (You can still opt in to it if you want.)
It's entirely unclear how godoc is meant to operate in the presence of modules.  I was trying to set up a new repo for a v2 that might be module safe, but I have no idea how to direct godoc at a specific branch.  Google and go help doc were unhelpful in explaining this.
This is all rather frustrating, because getting away from $GOPATH seems to be such a good thing.
At any rate, it seems that Go's modules are not yet fully baked.  I hope they figure out a way for existing packages to be supported automatically, without requiring repos to be reorganized.  (I realize this is already true for most packages, but for some -- like my mangos/v2 package -- it doesn't seem to hold.)
Categories: Personal Blogs

Fix cryptography build after setuptools update

github/OpenIndiana/oi-userland - Sat, 11/17/2018 - 20:17
Fix cryptography build after setuptools update Based on following solaris-userland commit: From ca77d72dc246de31f961b0e00bce66cb1ad986ff Mon Sep 17 00:00:00 2001 From: Libor Bukata <Libor.Bukata@oracle.com> Date: Wed, 3 Oct 2018 08:35:53 +0100 Subject: [PATCH] 28732185 Missing support for python ABI3 compliant extensions
Categories: oi-userland

Don't add incorporate dependency on l10n-incorporation to userland-in…

github/OpenIndiana/oi-userland - Fri, 11/16/2018 - 23:10
Don't add incorporate dependency on l10n-incorporation to userland-incorporation
Categories: oi-userland

Quick links to IPS Repositories

SFE Articles - Thu, 11/15/2018 - 02:22

(seriously, do *not* mirror these repositories - instead help me build a release repo which is HA and can be mirrored)

Browse

By clicking on the URL you can browse available packages for your OS:
(X86 only! Donate a SPARC build zone? Well there is something in preparation, ask for updates on this!)

Solaris 11.3 IPS Packages Repository (517)
http://sfe.opencsw.org/localhosts11

Solaris 11.4 (was Solaris 12 internal) IPS Packages Repository (291)
http://sfe.opencsw.org/localhosts12

OpenIndiana OI151a8 (and a9) IPS Packages Repository (333)
http://sfe.opencsw.org/localhostoi151a8

OmniOS IPS Packages Repository (192)
http://sfe.opencsw.org/localhostomnios

OpenIndiana Hipster (moving target) IPS Packages Repository (234)
http://sfe.opencsw.org/localhostoih

discontinued: testing repo localhosts12nogccdeps -> please use localhosts12 now.

Publisher

To configure those repos as publishers use: (pick only the publisher matching your OS)
pfexec pkg set-publisher -G '*' -g http://sfe.opencsw.org/localhosts11 localhosts11  #S11.3
pfexec pkg set-publisher -G '*' -g http://sfe.opencsw.org/localhosts12 localhosts12  #S12 #S11.4
pfexec pkg set-publisher -G '*' -g http://sfe.opencsw.org/localhostoi151a8 localhostoi151a8  #old OpenIndiana
pfexec pkg set-publisher -G '*' -g http://sfe.opencsw.org/localhostomnios localhostomnios  #OmniOS
pfexec pkg set-publisher -G '*' -g http://sfe.opencsw.org/localhostoih localhostoih  #new OpenIndiana Hipster

These repositories are 1:1 copies from the build machines, so please expect the contents to change frequently; the content may also have bugs. Drop me a note (in the comments, or by email) if a package works (yeah, we like that!) or breaks for you (we need to know that).
One day there will be a process which establishes a voting system. It should be used to promote packages from the development repositories to a public release repository. If you have ideas how a test / promotion process should look, please let me know (in the comments or by email).
If you are a programmer who can create or at least design a web workflow for the package promotion, then please get in contact very soon to discuss possibilities.
What we need as well for SFE are volunteers for testing the packages. Would you like to be one?
 

What is it good for

These are development repositories, but they are fine for normal use.
For production use, you can compile packages yourself, or help us build up a "release" repository with QA and a voting website to get packages promoted to "release".

Mirroring the Repository?

Please do *not* mirror these development repositories. You can download a package and its prerequisite packages by passing the special "-r" switch to the pkgrecv command. That saves capacity on the server. I'm sure you understand that mirroring 18 GBytes for the whole repository is not the right thing to do.
Once a release repository exists, you are welcome to mirror this (though we understand that it only makes sense in very, very rare cases).
Thank you!

Notes

Please keep in mind: You can install any exact package version if you specify the *complete* FMRI listed including the timestamp.
Example:
pkg list -avf postfix
[...]
pkg://localhosts11/sfe/service/network/smtp/postfix@3.0.3,5.11-0.0.175.3.1.0.5.0:20160108T10293
[...]
To install exactly that version and no other, use:
pfexec pkg install -v pkg://localhosts11/sfe/service/network/smtp/postfix@3.0.3,5.11-0.0.175.3.1.0.5.0:20160108T10293

This way, you can install exactly an *older* or a *newer* package than the resolver would auto-select for you. This enables you to help yourself in case we have a broken package in these build repositories: you simply install another version which works for you.

Questions?

Drop me an email at sfepackages at g mail dot com - or create a useraccount here on the blog and leave me a comment.
Need additional packages? Put them on the package wishlist here: http://sfe.opencsw.org/wishlist!

Thank you for using SFE packages and even more, for spreading the word!

 

Updates: 2016-11-22 -> 20170117 remove old s12nogccdeps repo, wording about volunteering for web workflow package promotion
20181115: change name from S12 (development) to S11.4, change S11 to S11.3

Tags: IPS Package Repository Solaris 11 Solaris 12 OpenIndiana Hipster OmniOS
Categories: SFE

diffstat: bump to 1.62

github/OpenIndiana/oi-userland - Wed, 11/14/2018 - 20:39
diffstat: bump to 1.62
Categories: oi-userland

cscope: bump to 15.9

github/OpenIndiana/oi-userland - Wed, 11/14/2018 - 20:37
cscope: bump to 15.9
Categories: oi-userland

itstool: bump to 2.0.4

github/OpenIndiana/oi-userland - Wed, 11/14/2018 - 20:36
itstool: bump to 2.0.4
Categories: oi-userland

fontconfig: apply latest bug fixes

github/OpenIndiana/oi-userland - Wed, 11/14/2018 - 20:33
fontconfig: apply latest bug fixes
Categories: oi-userland

font.mk: rebase issue

github/OpenIndiana/oi-userland - Wed, 11/14/2018 - 20:32
font.mk: rebase issue
Categories: oi-userland

Add idnkit 2.3 (from oracle-userland)

github/OpenIndiana/oi-userland - Tue, 11/13/2018 - 19:23
Add idnkit 2.3 (from oracle-userland)
Categories: oi-userland

xvidcore: bump to 1.3.5

github/OpenIndiana/oi-userland - Sun, 11/11/2018 - 02:44
xvidcore: bump to 1.3.5
Categories: oi-userland
