Tuesday, August 24, 2010

Can patents and intellectual property rights put a deadlock on the information society?

What is information? Or rather, what does information make us capable of doing? Information put into context is knowledge. Information exchange is the basis for empirical evolution and great inventions. Without a relatively free flow of information, society cannot evolve and prosper. Historically, things would have turned out quite differently if the alphabet had been patented, or mathematics protected as intellectual property.

While listening to a podcast interview with Robert Laughlin, which I thought was only about the future of carbon and the climate, the conversation also touched on patents and intellectual property rights (at 50:35). Protection of information has to some extent restricted the US from evolving empirically, and jobs are outsourced to e.g. Japan and now China. The patents are kept by American companies, but production is not in the US. It is unclear how this affects employment and innovation in the long run, but there is a high probability that a connection exists. It got me thinking about how such protective measures affect our society. Laughlin mentions a book he has authored, Crime of Reason, which reasons about this subject.

Just think of how some cities and whole nations became recognized mariners in the era of sailing ships. By sharing knowledge, and demolishing the church's false demagogy that the world was flat, they conquered the earth. Little or no knowledge was patented before industrialization, at least not commodity knowledge.

The invention of the internet has let loose massive flows of information. Our society and daily lives are packed with technology. Information technology is ubiquitous and indispensable in the parts of the world calling themselves information societies. What disturbs me is that the tools we are so dependent on are, to an increasing degree, illegal to tinker with. Apple is at the forefront of this development, but they are not alone. Given that a lot of smart people who buy products see ways to improve them, it is a waste of talent not to let them. The knowledge of how the tools we depend upon work should be available. Reverse engineering should not be an act of crime. The products themselves are just as valuable with that knowledge available, if not more so. When products can be extended in ways the manufacturer did not think of, their usefulness and usability increase.

This is especially true for software, which increasingly becomes the inner workings of our tools. Did you know that the average car runs software with over 10 million lines of code? How many know how that code works, as opposed to the traditional home mechanic doing maintenance on his own car? Recently it has been shown that wireless pressure sensors are vulnerable to malicious attacks. Patents cannot protect you from criminals, but people with good intentions (and I am fairly convinced they outnumber the criminals) could reveal such things. The most capable could even provide fixes. Software should not be patented. The value is in goods that can be traded and in value-adding services that use the knowledge. Using information correctly is complex (the instantiation of knowledge) and will always be in demand.

My point in arguing that knowledge about how our tools work should be available is that this is how it has been for most of our civilization, and probably before that too. When too many patents and too much intellectual property are protected from reuse and tinkering, the information society may become deadlocked. If this is true, it is a slow process. It is like the story of the boiled frog, which does not notice it is being boiled when placed in cold water that is slowly heated.

The practice of protecting knowledge may eventually restrict desired and needed innovation. In the context of the interview with Laughlin, one can conclude that it slows down or prevents much-needed consensus on what environmental challenges we are really facing and how they can be solved. To make it clear where I am going with this: environmental challenges are global, and the internet is made for global information exchange. The tools we use, which are largely the cause of (our perceived) environmental challenges, are protected from tinkering. Information protection and patents are not helping us figure out what we have to do.

Saturday, August 14, 2010

Has Oracle killed innovation on the Java platform?

Most of the innovation in the software world is derivative work. The whole software innovation ecosystem is empirical, and new offspring see the light of day with knowledge originating from other successes and failures. Further, a lot of innovative products are based on commodity software, which saves innovators from the tedious and costly work of doing everything themselves. If these mechanisms break apart, the whole innovation ecosystem in the software world will crumble. I think this article explains how innovation happens today, and it lists some famous everyday innovations that were never patented: things every one of us uses almost every day. Had they been patented, some things would be quite different today.

So, Oracle sues Google over some patent infringements and IP rights. Because they own Java, they can do just that. Google was clever in working around these patents and IP rights, and Sun saw no interest in pursuing possible infringements in court. I guess they saw Google's work as strengthening Java's overall position: even if the code is not portable, knowledge is, and knowledge is very easily transferable. Further, it can result in new offspring and innovations.

The Java Virtual Machine is a commodity that a lot of businesses and open source projects rely upon. The desktop and server editions are open source under GPLv2. The mobile edition is not fully open source, and it is here that the Oracle lawyers (hyenas would be more appropriate, methinks) see legal meat to dig into. How exactly they will argue is not yet revealed, but it could be that they will argue lost business. I would say it is downright unethical to sue another party on such a basis. Would Apple's iPhone have had Android's market share if Android did not exist? There are no other real competitors to Apple just now, and is Oracle producing phones? Can they show a prototype? Can Oracle provide any proof that Java ME has lost market share (as far as I know, Java ME has never played any significant role in a market sense)? They are just hurting the JVM and the Java language as a commodity by injecting insecurity and fear into the ecosystem. Maybe this will be the event that triggers completely new programming languages, or strengthens some new ones in the pipeline: languages with absolutely no patent strings attached and no potential for misuse by the "owner".

The consequences could be devastating for innovation on the Java platform. Those who want to use the Java platform must from now on be extremely careful not to irritate Oracle's lawyers. What do the Scala people think now? Will Oracle try to sue them for lost revenue on Java IDEs, or make some ridiculous changes to stop Scala?

In addition to damaging innovation on the Java platform, this lawsuit will drain energy and time from management at Google and Oracle. Who gains from that? Apple and Microsoft. While Oracle bashes Google with stone age business models, competitors can exploit their distraction from the scene where innovation happens.

What feels so completely wrong about this lawsuit, apart from totally missing the point of how the software industry works, is that Google has probably been one of the biggest contributors to the diffusion of the Java language on mobile platforms, namely through Android. Android provides little or no revenue for Google, who see it merely as an innovation and business platform that suits them. I think Oracle will have a hard time arguing for any economic losses, as Google does not make money on Android directly, and Oracle is certainly not in the ad business.

Friday, August 13th, 2010 is a sad day in software history, and James Gosling foresaw it during the acquisition negotiations with Oracle. He must have felt like Albert in We, the Drowned by Carsten Jensen. Albert sees war victims being killed in his dreams before it actually happens.

Update 18.08.2010
Charles Nutter has written a thorough blog post on the issue.

Wednesday, May 12, 2010

Check your brakes

This week I had several near-accidents while riding my bike because my brakes were worn out. With every incident I got more careful and lowered my speed.

I changed my brakes, and I can tell you, it was a revelation. I could go much faster, and could stop almost instantly when required. The effects are only positive: I can go faster and more safely, and it is more fun. The safety part means I don't risk hurting myself or others. Good brakes kind of set you free.

During my first ride with the new brakes it struck me that this has analogies to my professional life as a programmer. When programming it is very easy to assume you have understood the requirements, the technology you are using, and all kinds of other things.

Developers, projects and organizations should integrate feedback and brakes into their work. Good brakes are useless without a proper signal to use them.

A brake in programming can take several forms, which I will try to show here.

A programmer should listen to all the feedback provided by compiler warnings and by automated and peer code reviews. When struggling to name a software artifact, stop coding and take a break. Find a colleague who can act as a rubber duck, or ask for advice. Do pair programming.

Unit tests provide a concrete signal (when done adequately), and in some ways act as a brake. They let you refactor, delete and improve code more freely. A good test can also stop you from implementing plainly wrong functionality when you cannot satisfy correct assertions in any way.
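As a minimal sketch of what such a brake can look like (JUnit 4, with a made-up PriceCalculator as the class under test):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example: a unit test that acts as a brake. If a later change
// breaks these expectations, the red bar signals "stop and reconsider"
// before more damage is done.
public class PriceCalculatorTest {

    // Tiny class under test, included here to keep the sketch self-contained.
    static class PriceCalculator {
        double discountFor(double orderTotal) {
            // 10% discount on orders of 1000 or more, never a negative discount.
            return orderTotal >= 1000 ? orderTotal * 0.10 : 0.0;
        }
    }

    @Test
    public void noDiscountOnSmallOrders() {
        assertEquals(0.0, new PriceCalculator().discountFor(0), 0.0001);
    }

    @Test
    public void tenPercentDiscountFromOneThousand() {
        assertEquals(100.0, new PriceCalculator().discountFor(1000), 0.0001);
    }
}
```

When an assertion like this cannot be satisfied, that is the signal to stop and rethink, rather than bending the test to fit a broken implementation.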

Likewise, performance tests, security reviews, coding guidelines and so on can act as signals to brake when things are going in the wrong direction. The more of these you can automate, the faster they will let you go.

I think the principle of integrating feedback and brakes into all work done in an organization is valuable. For software projects, frequent releases will provide feedback and occasionally tell you to grab the brakes. Open discussions in the project can reveal bottlenecks and inadequate ways of doing things. No topic should be avoided, as avoidance can hide potential problems.

Requirements must be asserted too, with prototypes, and design can be explored and refined with CRC cards.

A promising new technique being developed for decision making in software architecture can provide a much tighter feedback, brake and correction loop. I think the board game approach can be adopted for a lot of complex decision-making processes.

A Policy Advisory Board (PAB) will be responsible for overseeing compliance with common rules and for updating outdated ones. At the organizational level, the same applies as for projects regarding open discussion. Let people discuss freely, and use wikis and microblogging to unlock knowledge about inefficiencies.

The key observation about brakes is not only the slowing-down aspect; it is also about changing direction. Changing direction at high speed may be impossible or risky, or it has to be done with a large turning radius. Slowing down for a moment improves your agility.

After all this talk about brakes, it is important to state that brakes are essentially hooks into work processes. These hooks should let people intervene to change direction. The brakes do not provide the decisions that must be made, just an opportunity to make them before too much damage is done. They also provide excellent opportunities to learn. Failure is valuable learning and an integral part of a learning organization. Learning is, among other things, effectively checking your brakes, and the loop is complete.

Sunday, April 4, 2010

Can tech obstruct your fundamentals?

This Easter I came across this blog post about The Value Of Fundamentals (highly recommended reading), and it made me think about how and why we adopt new technology. (Maybe I am fond of this way of thinking because I was a martial arts practitioner myself, and still highly respect them.) In the adoption process, the problem the technology is supposed to solve gets lost, and it ends up being used everywhere as a one-solution-fits-all technology. Fundamentals are not as well understood as they should be in many professions, and software development is no exception.

Technology can prevent you from using your basic skills, as "advanced" technology may look like it can solve several of your problems at once. This is sometimes true, but more often it is just smoke and mirrors. In addition, technology brings with it a new set of problems (which you will try to solve with workarounds), and sometimes it becomes a hammer that you'll try to apply to any problem you stumble upon.

This occurs frequently in software development, but it happens elsewhere too, often with computer technology that is supposed to solve multiple problems. Far too often this obstructs the use of basic skills to solve problems. The challenges here can be mapped to other professions, e.g. physicians at large hospitals relying too much on all the available technology rather than on their basic skills.

Software development has always been plagued by the Silver Bullet Syndrome, and silver bullets often mean technology that in effect makes us rely on factors outside the control of the individual, the project, and most importantly the stakeholders. But it is not only silver bullet technologies that can make you lose sight of really simple and elegant solutions to the most important problems you are trying to solve. Even well-proven and widely used technology may obstruct the view of the problems.

Here are some examples of categories of technologies that often obfuscate simple solutions:
  • Frameworks
  • Code generators
  • Integration technology, e.g. OR-mapping
Most of these add complexity to the solution (while promising an overall complexity reduction), as they are often very general solutions trying to solve multiple problems. Solving complex problems by adding multiple frameworks to the solution makes for a very complex solution. This is where fundamentals become valuable, and decisions on whether to use a framework should be balanced against using basic skills.

When basic skills are forgotten or rarely practiced, you rely too much on technology to solve your problems. What happens then is that so-called advanced technology is applied to simple problems that should be solved by applying fundamentals.

What is really bad about using technology instead of basic skills in a software development context is that the technology must be maintained throughout a product's lifetime. When a technology vendor stops supporting a certain technology, all products relying on it implicitly receive a death sentence. Products implemented mostly with basic skills have a better survival rate, as they can often be ported to new platforms where the legacy platform's technologies are unavailable. It can be very complex to update third-party frameworks if they are interdependent or the code is invaded by the technology in use. The maintenance cost can rise and, if ignored, create serious technical debt.

Additionally, well-written software passes best practices on to newcomers reading the code, and when a skill or practice is questioned it triggers far more valuable discussions than whether to throw out or introduce some technology. Evolving basic skills is a whole lot more valuable than decisions on the latest fad. Evolving basic programming skills empowers the individual, project, organization and profession. Technology brings with it a substantial number of abbreviations that are impossible to communicate to users and stakeholders. Regardless of what technologies are involved in your project, keep them out of non-technical discussions. Users don't care, and are certainly not impressed. Software that works, and that evolves with its users, impresses. Well-crafted software is easier to evolve in the long run than fighting with frameworks and code generators.

So what fundamentals should all software developers master? I can easily come up with this short list, but it is not comprehensive or prioritized in any way:
  • Central design patterns
  • Know at least 2-3 programming languages, 1-2 of them deeply, and be acquainted with a scripting language.
  • Create readable and maintainable code
  • What is robust code?
  • Important concurrency concepts
  • Coupling theory
So is this a post that promotes the Not-Invented-Here syndrome? Not at all! Just let programming fundamentals weigh more heavily against e.g. frameworks and code generators. When cost and/or time constraints favor using them, look at the source code (not only the documentation) and evaluate, among other things, whether:
  • it is well written
  • it will not invade your code
  • it will not get in the way of creating elegant solutions to your problems.
As a pleasant side effect you might even learn a technique you are currently unaware of. When you understand how it is written, it will be far easier to communicate with the vendor if changes must be made.

When you think about it, technologies come and go, but the software industry in general changes at a much slower pace. It is time to shift the balance from praising the latest fads towards improving basic skills in the software development profession. This will make both the software itself and the programmers more valuable. Investing in fundamentals should also be incorporated into organizations' learning cycle. Finally, programming more consciously with basic skills will contribute to fewer monocultures in software, which in my opinion results in healthier software.

Wednesday, March 3, 2010

Stone age business models

It is sad to observe that patent-based lawsuits against competitors seem to be part of the major mobile manufacturers' business strategies these days. They have started to bang each other over the head using lawyers armed with patents.

Nokia sues Apple, who sues HTC over UI technology. They've started to dig trenches instead of trying to beat the competition by innovating. High-profile lawsuits like this require a lot of attention from the companies' leaders, which distracts them from making real business decisions. Apple especially seems to have already forgotten how fast they recently grew in the smartphone market, simply because they were innovative (and maybe already had a cool reputation). Well, those times seem to have passed, and they've started to protect their innovations.

What Apple and Nokia are forgetting here is that this will force the competition to do something uniquely new. Somewhere, someone will come up with innovations that make the iPhone look outdated. Innovation in this space often happens outside the big corporations, and this is becoming particularly true nowadays. The cloud offers startups vast computing resources, open source provides building blocks to start with, and social media gives rapid feedback. No patent can protect an investment from that. See more on this here: https://sites.google.com/a/webstep.no/openinnovation/Home/news-about-open-innovation/guykawasakioninnovationandthemythoflightningboltinspiration

I think patents on software are an anomaly that must be buried and forgotten. They cannot protect software investments, and only give the patent grantees a false feeling of safety.

This week a new way of doing UI, Skinput, was presented, and that from the patent borg in Redmond, or at least a Microsoft-driven university project. I guess this will be patented too, since it is a Microsoft-led innovation, but it nevertheless comes from an "unexpected" source, as Microsoft has lately been accused of not innovating much.

Update: A very interesting podcast on TechRepublic on this subject.

Update 12.03.2010: Some very interesting and relevant blog posts:
The New Paradigm of Advantage and Jonathan Schwartz on Patent Litigation

Saturday, February 20, 2010

Major and minor tyrannies in software

Service-Oriented Architecture (SOA) can enable and support the redesign of business processes, helping organizations tap more of their potential or even provide uniquely new products and services.

But there are some pitfalls that might show up later in the process as obstacles, caused by missing, forced, or accidental decisions. The pitfalls I will elaborate on here are those that dictate that projects use technologies inappropriate for the problems they are supposed to solve. I call these majority and minority software architecture tyrannies. Software architects must spot them and handle them properly.

Majorities
Majorities often force inappropriate solutions on other projects in an organization, disguising them as standard solutions to be used. This eliminates good and qualified decisions in projects that are doing something different from previous projects. Doing something different is the norm in software projects, as they are often part of renewal and change in business and technology.

Such standards are often invasive and impossible, or at least very hard, to change later.
Majority tyrannies must be met with knowledge about better alternatives and about how those alternatives can better support the requirements and the business strategy. In cases where an invasive standard technology is forced upon a project, an isolation layer can be introduced to prevent unwanted dependencies from diffusing into the architecture and code.

Standards often appear as a relief, liberating the project architect from making decisions. Not staying alert can prove fatal to the project later on.

Minorities
Survivor projects clinging to old or inappropriate technology can prevent others from moving on to better or more suitable technologies. One measure dependent projects can take is to create an integration Anti-Corruption Layer (the original definition comes from DDD), so that the dependency will be easier to replace later.
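A minimal sketch of what such a layer could look like in Java; the legacy CRM client and the domain types here are made up purely for illustration:

```java
// Hypothetical sketch of an integration Anti-Corruption Layer (each top-level
// type would normally live in its own file).

// What the legacy system exposes; stubbed here only to keep the sketch compiling.
class LegacyCrmClient {
    String[] fetchRecord(String custNo) {            // returns [custNo, fullName]
        return new String[] { custNo, "Ada Lovelace" };
    }
}

// The model our own code uses; owned by us, not by the legacy system.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// The rest of the code base depends only on this interface.
interface CustomerDirectory {
    Customer findByCustomerId(String customerId);
}

// The only class that knows the legacy technology. Replacing the old system
// later means rewriting this adapter, not every caller.
class LegacyCrmCustomerDirectory implements CustomerDirectory {
    private final LegacyCrmClient legacyClient;

    LegacyCrmCustomerDirectory(LegacyCrmClient legacyClient) {
        this.legacyClient = legacyClient;
    }

    @Override
    public Customer findByCustomerId(String customerId) {
        String[] record = legacyClient.fetchRecord(customerId);
        // Translate the legacy representation into our own model,
        // so its quirks stay inside this layer.
        return new Customer(record[0], record[1]);
    }
}
```

Only the adapter knows the legacy representation, so swapping out the surviving system later touches one class instead of every consumer.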

A subset of minority tyranny cases is the lack of proper versioning of dependencies, making releases of different projects interdependent. In SOA this is amplified and has become a runtime challenge, as opposed to earlier, when it was primarily a build-time problem. Several versions of shared services (and components) must be supported simultaneously to enable independent and smooth releases. In SOA the most flexible way of handling versions is the Evolving Endpoint pattern.
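As a rough sketch of the idea (plain Java interfaces standing in for whatever service stack is in use, and not necessarily the Evolving Endpoint pattern as originally described), the same implementation can serve two versioned contracts side by side:

```java
// Hypothetical sketch: two versioned service contracts served simultaneously,
// so consumers bound to v1 keep working while new consumers adopt v2.

// Version 1 of the contract, kept alive for existing consumers.
interface OrderServiceV1 {
    String orderStatus(String orderId);
}

// Version 2 adds information without breaking v1 consumers.
interface OrderServiceV2 {
    String orderStatus(String orderId);
    String estimatedDelivery(String orderId);
}

// One implementation can back both endpoints, e.g. published at
// /services/order/v1 and /services/order/v2 in a SOA stack.
class OrderService implements OrderServiceV1, OrderServiceV2 {
    @Override
    public String orderStatus(String orderId) {
        return "SHIPPED";                       // stubbed for the sketch
    }

    @Override
    public String estimatedDelivery(String orderId) {
        return "2010-09-01";                    // stubbed for the sketch
    }
}
```

This is what makes independent releases possible: old consumers are not forced to upgrade in lockstep with the service.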

Consequences and how to deal with this
The consequence of making wrong architecture decisions, or of forcing or avoiding them, is high complexity in release management and software that supports business processes poorly. Since both organizations and software technology change continuously, decisions cannot be written on stone tablets. Previous decisions must be challenged, and discarded when they are:
a) proven wrong
b) outdated
c) proved unnecessary

This is part of SOA Governance and must be handled by a Center of Excellence or a Policy Advisory Board.

Majorities in particular often lead to architectural monocultures, which are bad from both an innovation (evolution) and a security point of view. These are strong motivators for evaluating and making architectural decisions based on business requirements rather than on what has worked before.

This post may appear to be an anti-standards manifesto, but that is not my intention. Good standards have been re-evaluated many, many times and have survived those evaluations. This "process" runs the test of time on standards and gives feedback for refinement. I think it can be viewed as a variation of natural selection, where the fittest survive. Natural selection, by the way, does not apply in monocultures. Monocultures can produce odd mutations, and eventually they collapse.

For an individual service this manifests itself in providing multiple endpoints, which makes the service as accessible, usable and flexible as possible. A service consumed by many clients gains a strong position in the organization(s) using it, and thus it may itself become a well-proven standard.