Thursday, October 15, 2015

Is there a place for another IoT device?

It was brought to my attention recently that Mozilla is preparing to launch a new IoT device named CHIRIMEN:
CHIRIMEN (source: http://mozopenhard.mozillafactory.org)
CHIRIMEN runs Firefox OS, which means that applications are web pages (consisting of HTML/CSS/JavaScript) and hardware bindings are exposed as JavaScript APIs through the Gecko engine. The board above is controlled by this code (copied from http://mozopenhard.mozillafactory.org/):
<!doctype html>
<html lang="en" dir="ltr">
<head>
  <title>CHIRIMEN example - Led</title>
  <script type="text/javascript" src="gpio.js"></script>
  <script type="text/javascript">
    var v = 0;
    GPIO.getPort(196).then(
      function(port) {
        setInterval(toggleLight, 1000, port);
      }
    );
    function toggleLight(port){
      v = v ? 0 : 1;
      port.write(v);
    }
  </script>
</head>
<body>
</body>
</html>

Such an approach has some advantages - for example, it is dead easy to write an application for such a device if you are a web or full-stack developer, and you do not need to be familiar with Linux/Android or electronics. The entry barrier seems to be incredibly low.

I do not know whether Chirimen will be successful, but I still find it to be a very good project for the Mozilla Foundation. Actually, one of its best bets for the future.

You may not be following this, but the Firefox browser is in decline:
Firefox market share decline, by Daniel Cardenas (Wikipedia)
This is a big problem, because Mozilla is committed to openness and privacy protection, and therefore it cannot monetize user data. Oh, and it does not have a big company behind it earning money elsewhere.

What is more, competing browsers come pre-installed on handheld devices, because their owners successfully promoted their own mobile operating systems. Firefox cannot innovate fast enough to win users back from Chrome/Safari/Edge, since every innovation can be easily copied.

If this trend continues, Mozilla will cease to exist in 2020.

So Mozilla needs to find a new market. The desktop is lost, mobile is lost (Firefox OS was too late to market).

But IoT looks attractive. It still has not reached mass production (because it is still too expensive). The breakthrough will happen when someone is able to deliver devices so cheap that it becomes possible to put them everywhere (I think that is the < $1 price point).

Nobody can tell exactly when it will happen (innovation is unpredictable). But it may happen relatively soon (before 2020). IoT in every home, in every piece of clothing, or even in pens. Gazillions of devices.

Now imagine that Mozilla grabs 5% of that market. Or even 1%. 1% of a gazillion is still huge potential. This is the true reason why I find this device to be a good move. Mozilla needs to do three things:
  • work on this product with users and improve it constantly
  • survive 5 years and build know-how
  • when the < $1 IoT device enters the market, attack it at full speed.
I would be very careful with saying that the IoT market is full. Chirimen may make a lot more sense than it appears to.

Disclaimer: This is a strong opinion, weakly held. I'd love to be at Mozilla right now and have my analysis verified.

Monday, April 27, 2015

Call 4 papers - where is an open source app?

I was recently submitting a number of conference proposals related to my current area of interest, and one thing that struck me was the lack of a rock-solid, easy-to-use call-for-papers application.

Each time I wanted to propose a talk to a conference, I had to create a profile, confirm my e-mail address and provide a lot of details, and only then was I allowed to fill in the actual talk details.

"Don't reinvent the wheel!" they say.

So why, WHY do conference organizers decide to write their OWN c4p application? That makes no sense.

Online proposal submission is no longer a feature that will distinguish your conference.

That's why I am starting my new, open source application for accepting conference talks. It will not be fancy, but it will work.

Clone it, run it, modify it, USE it.

Contribute if you wish.






Saturday, August 23, 2014

How to render image serverside using OpenShift Node.js?

Hey everyone,

Leaving the developer world in favor of a non-coding job has certain advantages.
And one big disadvantage - no more coding. That is a big problem, and I lasted about 6 months before I started creating something after hours - just for my own pleasure.

Things have changed significantly since I was studying. Back then it was a big problem (especially for a poor student) to rent a server and a static IP address. Today, with services like Weebly and OpenShift (a really great Platform as a Service), you can get a small portal (like mine) up in less than a couple of hours and for less than $100.

OpenShift provides a great array of technologies - I have chosen Node.js + MongoDB, as it offers the quickest route to a working app.

One big problem I had was that I wanted to render images server-side. This is not trivial, as there are no pure JavaScript libraries that can do it. One can rely only on native software, but combine that with a semi-automated PaaS and a custom server-side setup and you will feel the pain.

So I'm sharing my setup with you, as I got it working - it renders images using the node.js canvas module, which uses the underlying Cairo library.
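
For reference, the rendering code itself is short once the module is installed. Below is only a minimal sketch, assuming the node-canvas 1.x API that was current at the time; the drawing commands and the output file name are placeholders, not part of my actual app:

// minimal sketch: render a PNG on the server with node-canvas (hypothetical example)
var fs = require('fs');
var Canvas = require('canvas');

var canvas = new Canvas(320, 240);          // width x height in pixels
var ctx = canvas.getContext('2d');

// draw a simple placeholder image: a solid background and one line of text
ctx.fillStyle = '#336699';
ctx.fillRect(0, 0, 320, 240);
ctx.fillStyle = '#ffffff';
ctx.font = '20px sans-serif';
ctx.fillText('rendered on the server', 20, 120);

// stream the result to a file (it could just as well be written into an HTTP response)
var out = fs.createWriteStream('out.png');
var stream = canvas.pngStream();
stream.on('data', function (chunk) { out.write(chunk); });
stream.on('end', function () { out.end(); });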

Installing Cairo on an OpenShift gear is not so simple - it is necessary to build Cairo and its dependencies manually:

# point the build at the libraries that will be installed under $OPENSHIFT_DATA_DIR/usr/local
export PATH=/sbin:$PATH
export LD_LIBRARY_PATH=$OPENSHIFT_DATA_DIR/usr/local/lib:/opt/rh/nodejs010/root/usr/lib64:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$OPENSHIFT_DATA_DIR/usr/local/lib/pkgconfig

cd $OPENSHIFT_DATA_DIR
curl -L http://sourceforge.net/projects/libpng/files/libpng15/1.5.18/libpng-1.5.18.tar.xz/download -o libpng.tar.xz
tar -Jxf libpng.tar.xz && cd libpng-1.5.18/
./configure --prefix=$OPENSHIFT_DATA_DIR/usr/local
make 
make install

cd $OPENSHIFT_DATA_DIR
curl http://www.ijg.org/files/jpegsrc.v8d.tar.gz -o jpegsrc.tar.gz
tar -zxf jpegsrc.tar.gz && cd jpeg-8d/
./configure --disable-dependency-tracking --prefix=$OPENSHIFT_DATA_DIR/usr/local  
make
make install

cd $OPENSHIFT_DATA_DIR
curl http://www.cairographics.org/releases/pixman-0.28.2.tar.gz -o pixman.tar.gz  
tar -zxf pixman.tar.gz && cd pixman-0.28.2/  
./configure --prefix=$OPENSHIFT_DATA_DIR/usr/local   
make 
make install

cd $OPENSHIFT_DATA_DIR
curl http://public.p-knowledge.co.jp/Savannah-nongnu-mirror//freetype/freetype-2.4.11.tar.gz -o freetype.tar.gz
tar -zxf freetype.tar.gz && cd freetype-2.4.11/  
./configure --prefix=$OPENSHIFT_DATA_DIR/usr/local   
make 
make install 

cd $OPENSHIFT_DATA_DIR
curl http://cairographics.org/releases/cairo-1.12.14.tar.xz -o cairo.tar.xz  
tar -xJf cairo.tar.xz && cd cairo-1.12.14/  
./configure --disable-dependency-tracking --without-x --prefix=$OPENSHIFT_DATA_DIR/usr/local 
make 
make install

After that it is necessary to hack the package.json file - it must not contain canvas, because the paths are wrong and canvas would never discover Cairo in its non-standard location. Canvas has to be installed using a build hook:

export PATH=/sbin:$PATH
export LD_LIBRARY_PATH=$OPENSHIFT_DATA_DIR/usr/local/lib:/opt/rh/nodejs010/root/usr/lib64:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$OPENSHIFT_DATA_DIR/usr/local/lib/pkgconfig

cd $OPENSHIFT_REPO_DIR
scl enable nodejs010 v8314 'npm install canvas'    

And a really dirty hack. OpenShift tends to cache installed modules, which is a bit of a problem, because when the modules are restored, they know nothing about where Cairo is installed. This simple pre_build hook is reverse-engineered from the OpenShift code - it removes the cached modules and therefore forces canvas to be reinstalled by my script.

rm -rf "${OPENSHIFT_NODEJS_DIR}/tmp/saved.node_modules"

I hope this will help someone trying to run the canvas module on OpenShift - it took me a couple of long hours to figure out what was going on.

Wednesday, December 18, 2013

P2 - retrospection

My work towards P2-RPM integration is nearing its end. The patches are in Gerrit, here and here. While Eclipse with the patches is being built, I have plenty of time to think.

I've been with P2 since it was announced at EclipseCon 2008 in Santa Clara. It's hard to remember how crowded the room was, but I remember my sheer enthusiasm for the idea of solving the satisfiability problem and running the optimal set of bundles.

Little did I know how much influence it would have on me. P2, and especially dropins, would turn out to be my daily bread for the next couple of years.

Then I changed jobs, hoping for something new, but P2 did not let me forget. Fedora turned out to rely on dropins (because there was *no* alternative). I tried to change Fedora, but was confronted with 20+ years of Linux release engineering, and became really convinced that P2 could have been quite different.

So, what could P2 look like now?
The core P2 functionality, responsible for installing things, should *not* run in a JVM. Java itself is not guaranteed to be present on every computer, and it has plenty of dependencies (at least on Linux), which makes it a very unwanted member of the installer stack. Not to mention that it adds size to the installer, which really matters for corporate setups and for people on modems. Another problem is that P2 running in a JVM cannot update that JVM because of file locking. Not to mention the lack of support for elevated privileges.

I can easily imagine P2 as a native, embeddable processing library that would take the state of a system and a request, and respond with the steps leading to the next state - without really touching anything, letting the installer deal with the file operations.

I'd be the first to work on it to integrate it fully into RPM and make it responsible for the whole Linux installation - not just Eclipse.

Disclaimer: Don't be afraid. These are only my thoughts, which (un)luckily I'm not able to materialize without making many people upset.


Friday, October 25, 2013

Google Talk plugin presence breaks Eclipse in Fedora 20.

This is the kind of news I really hate to announce, and at the same time, this is the reason why I'm addicted to open source. With open source I could report it, investigate it, or even work around it. But the only thing I can do with a binary plugin is to remove it.

Symptoms:
Eclipse 4.3.1 in Fedora 20 crashes shortly after content assist or Javadoc is shown. ABRT detects the crash.

Reason:
Both content assist and Javadoc are browser-based. The Google Talk plugin interferes with WebKit/SWT, and in the end the JVM crashes.

Workaround:
Remove Google Talk plugin.
yum -y remove google-talkplugin 

Thursday, October 24, 2013

Enabling Tycho tests for P2 - lessons learned

This morning, after starting Eclipse, I got this notification:


I find it to be a big step forward (at least for me), because from now on all P2 patches pushed to Gerrit will be automatically verified within quite a reasonable time - more or less 2 hours; yes, that's the time necessary to run the build and execute all the P2 tests.

However, the road to a green build was a rather bumpy one - here is the list of issues that may affect other people doing the same migration:

Issue #1 - Error code 23.

First reported as Bug 415489 - tycho-surefire occasionally fails with unexpected return code 23. Then, after investigation, a duplicate was opened by me:
Bug 417430 - tycho-eclipserun may interfere with tycho-surefire OSGi runtime.

Symptoms:
The Tycho build stops with an unexpected error code 23. The build is not marked as failed, it just exits.

Cause:
Tycho occasionally assembles and spins up Equinox instances when it is necessary to run OSGi-based tooling during the build. But in one place, Equinox was refusing to start and was returning error code 23, demanding to be restarted. The sequence that led to this was pretty simple:
  • Tycho assembled and ran an Equinox instance based on Kepler versions to generate the API description
  • a bit later, Tycho assembled and ran another Equinox instance based on Luna nightly to run the tests, but the configuration directory was not cleaned, so Equinox thought an update was in progress and demanded a restart.
Solutions:
  • Update to Tycho 0.19.0 - the issue has been fixed there
  • Change the tycho-surefire configuration area to avoid a collision with the API builder:
    <work>${project.build.directory}/surefireconf</work>

Issue #2 - Different naming schemas.

One of the tests was failing all the time, returning a doubled number of artifacts in the generated P2 repository (expected: 3, was: 6). What happened was that P2 was copying bundles from the running application, and this Surefire application was using a different naming scheme: regular Eclipse apps use the convention bundleId_version.qualifier.jar, but Tycho Surefire uses bundleId-version.qualifier.jar. Of course, P2 processed those files properly and generated a valid repo; only the test input was wrong.

Issue #3 - Circular dependencies.

P2 tests, to run properly, require platform-specific filesystem bundles. The only way to add those bundles to Surefire is to add the parent feature - and since we are building P2, the parent feature (org.eclipse.platform) would be resolved from an update site. Well, almost. The parent feature happened to include one bundle from the reactor (org.eclipse.update.configurator, I think). So almost everything was resolved from the update site, except this one bundle, which came from the reactor and did not satisfy the feature requirements due to its changed version.

The workaround was to use the java.io.File API (luckily this is all about test case preparation).

Some discussion concerning this issue seems to be happening in Bug 419201 - "mvn clean verify -Pbuild-individual-bundles" fails for Platform Compare.

Side note:
There is an ongoing effort to enable running tests during the build for particular components, under the umbrella of Bug 416904 - Allow to run tests with tycho-surefire-plugin: "In order to lower entry barrier and execution of unit tests [...]".

Best regards,

Monday, October 21, 2013

Rediscovering Mylyn (Builds)

I always have mixed feelings when I try to write anything about Mylyn. It's just impossible to cover all its greatness in one blog post, and the fact that it is written as an Eclipse add-on is not helping much in promoting it amongst my Linux readers (it's all because of this joke):
But let's try. First of all, Mylyn is an excellent tool to keep all your bugs in one place, which is very useful for me, as I very often jump between projects and need to switch between different areas quite fast:

Mylyn Task list - bugs from different sources in one place
But that's just the tip of the iceberg. The true power of Mylyn is 'context' management. What is a context? Well, it's the set of files you are working with, and Mylyn's ability to track which files are important for a certain bug is hard to overestimate when you get a comment like:
Can you include a "description.txt" file (or similar) that describes how to rebuild them, in case it is required in the future?

The context is just one click away - the only thing you need to do is activate your task by clicking the ball next to it:

An active task, a task without context, and an inactive task with recorded context.
So now we get to the main Mylyn functionality, the one that really lets the entire Eclipse shine - just compare the next two screenshots:



Have you noticed which files are presented in the 'Package Explorer' view? Yes, exactly the ones I really need! No clutter, no list scrolling! One click and you're back to the task that you left a week ago!

But even that's not all. If you have Continuous Integration running on Jenkins/Hudson, you can connect your Eclipse to it and get this lovely view of your jobs:
Builds view. Notifications included. No more page refreshing.
Can you see the small JUnit action? Guess what it does :). Yes, you are right - it opens the test results in the JUnit view:

Jenkins/Hudson build test result loaded into Eclipse.
And now - once you double-click a stack trace - Eclipse will open the file for you - no grepping, finding, searching - everything loaded into your really Integrated Development Environment!

Of course, this is just a small part of Mylyn's functionality - the part that appeals to me most in my daily work. But managers will be happy too, with all the project-tracking functionality integrated via OSLC, and other really powerful tools (out of scope for this blog).

Quick instructions on how to install Mylyn in Fedora (packaged by me):
sudo yum install eclipse-mylyn
Pretty simple - and really worth trying out!