<h1>Low Cost Meetup Recording</h1>
<p>Nicolas Ruflin, 2017-07-12 (http://www.ruflin.com/2017/07/12/low-cost-meetup-recording)</p>
<p>Over the years I have run different meetups, and one question that has popped up from time to time is whether the meetup could be recorded. A recording is useful for people who couldn’t attend or who want to watch parts again. I have tried different setups, and this page describes the one I’m using at the moment. Should parts of it change, I will update the page.</p>
<h1 id="requirements">Requirements</h1>
<p>My requirements for the setup are as follows:</p>
<ul>
<li>Portable: It must easily fit into my backpack</li>
<li>Support for demo session: Screen recording must be possible</li>
<li>Low cost: It should not cost more than $300</li>
<li>Wireless microphone: It must allow the speaker to move freely</li>
<li>No / minimal post processing: No special skills are needed for the post processing</li>
</ul>
<h1 id="setup">Setup</h1>
<p>First a quick overview of my setup to see the hardware and software that I use. I will go into details later on why I picked these specific items:</p>
<ul>
<li>Video Recording: <a href="https://www.logitech.com/en-us/product/c922-pro-stream-webcam">Logitech C922 Pro Stream</a></li>
<li>Speaker Screen Recording: <a href="https://www.araelium.com/screenflick">Screenflick</a></li>
<li>Speaker Audio Recording: <a href="http://www.samsontech.com/samson/products/wireless-systems/xpd-series/stagexpd1hs5/">Stage XPD1 Headset</a></li>
<li>Camera Stand: <a href="http://joby.com/gorillapod/gorillapod-magnetic">Magnetic Gorilla Pod</a></li>
<li>Recording Device: Speaker Computer with Screenflick (Mac with 2 USB ports required)</li>
</ul>
<p>Here is an example of a recording where Tyler Hannan is talking about a deep dive into Elastic Machine Learning:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/dPAVyv0u-40?list=PLZV6l8nwCLZiLeZ8c70STJFL1TcHG7u50" frameborder="0" allowfullscreen=""></iframe>
<h1 id="how-to-record">How to record</h1>
<p>To be able to record the screen of the presenter, they must install Screenflick in advance. Before the presentations start, I set up the external camera so that the speaker is in the center and the slides are in the background (if possible). As the camera stand is magnetic, it can be attached in lots of different places or clamped to a window handle (see image below). Keep in mind when setting it up that the speaker may walk around or sit down / stand up during the presentation. The main limitation when placing the external camera is the length of the cable. I always bring some tape to the meetups so I can also attach it to a wall if needed. Below is an image from one of the meetups where I attached the camera to a window handle:</p>
<p><img src="/images/camera-setup.jpg" alt="Camera Setup" /></p>
<p>After setting up the external camera, I attach it to the presenter’s computer. The same goes for the microphone receiver. I turn the microphone on and start up Screenflick. Screenflick must be configured to record the screen, the external camera and the audio from the wireless microphone.</p>
<p>A few seconds before the talk starts, I let the speaker start the recording with Screenflick. After the talk, the speaker stops the recording. After the meetup, I ask the speaker to share the recording. For this they can right-click on the recording in Screenflick and show it in the Finder. From there they can send it to me in whatever way they prefer.</p>
<h1 id="exporting-processing-and-publishing">Exporting, Processing and Publishing</h1>
<p>As soon as you have the Screenflick file, it can be opened in Screenflick. Choose the parameters that best combine the recording of the presenter with the recording of the screen. Depending on the presentation, a different corner of the screen may work better for placing the camera image.</p>
<p>After exporting the video, I cut off the beginning and the end of the video either in QuickTime locally or directly online in the YouTube Video Editor. There you can also enhance the audio if needed.</p>
<p>To publish the videos I use a public <a href="https://www.youtube.com/playlist?list=PLZV6l8nwCLZiLeZ8c70STJFL1TcHG7u50">YouTube Playlist</a>.</p>
<h1 id="limitations">Limitations</h1>
<p>The current setup has 2 major limitations:</p>
<ul>
<li>It only works if the presenter has a Mac</li>
<li>Recording of the Q&A is tricky and always requires the speaker to repeat the questions</li>
</ul>
<p>As a solution I leave the Q&A part out of the video, but that becomes tricky if there are lots of questions during the talk. For Windows or Linux machines I haven’t found software similar to Screenflick so far. Best would be to have Screenflick for these platforms too.</p>
<p>Some workarounds for the above problems can be found in the alternative setups.</p>
<h1 id="alternative-setups">Alternative Setups</h1>
<p>Over the last years I experimented with different setups. As the above setup might not work for everyone, I want to quickly elaborate on other setups that I tried.</p>
<h2 id="one-phone-setup">One Phone Setup</h2>
<p>For some time I only used my own phone for recording the meetups. A gorilla stand was used to mount the phone, and the <a href="http://www.samsontech.com/samson/products/wireless-systems/xpd-series/stagexpd1lm5/">Stage XPD1 Presentation</a> was used for the audio, connected to the iPhone through the <a href="https://www.apple.com/shop/product/MD821AM/A/lightning-to-usb-camera-adapter">Lightning to USB Camera Adapter</a>. For the recording on the phone I used <a href="https://itunes.apple.com/us/app/moviepro-video-recorder-with-limitless-options/id547101144?mt=8">MoviePro</a>.</p>
<p>This setup works well if no screen has to be recorded and an iPhone is available. I haven’t found any good options for Android. Also make sure to attach a battery pack to your phone in case you need to record for longer periods. After the recording, the files can be transferred to the computer using iTunes.</p>
<h2 id="two-phone-setup">Two Phone Setup</h2>
<p>Before I discovered that the Stage XPD1 microphone can be connected to an iPhone, I used two phones for the recording. On one phone (the one with the better camera) I recorded the video; to the other I connected a lavalier microphone, started the recording, and the speaker put that phone in their pocket.</p>
<p>This setup works pretty well if two phones are available. Especially for the audio recording, an old phone can be used. The main downside of this setup is that audio and video have to be combined afterwards, which can be painful if you don’t know much about video post-processing. To make the syncing easier, I normally asked the speaker to clap before they started talking.</p>
<h1 id="decision-background-for-hardware-and-software">Decision Background for Hardware and Software</h1>
<p>Before I settled on the software and hardware I described above, I tried out different software. In the following I want to quickly describe why I picked the above software over other tools.</p>
<h2 id="video--audio-recording-on-mac">Video & Audio Recording on Mac</h2>
<p>For the video and audio recording on a Mac I picked <a href="https://www.araelium.com/screenflick">Screenflick</a>. There are free alternatives available, such as QuickTime, which can also do screen recording. The biggest issue with QuickTime and other free tools was resource usage: recording a retina screen keeps the CPU busy at 70-80%, which is not acceptable if a presenter wants to do a demo at the same time. Screenflick only uses a fraction of the CPU. The main reason seems to be that Screenflick stores the video in a raw format and does the processing afterwards, in contrast to, for example, QuickTime.</p>
<p>The other reason for Screenflick is that it can record the screen and a camera at the same time, plus audio from an external source. All of this is stored separately, which allows you to export the video and audio combined in the way that works best for each presentation. It seems <a href="https://www.telestream.net/screenflow/overview.htm">Screenflow</a> also has all these capabilities and many more. I stuck with Screenflick as it’s cheaper and does what I need.</p>
<h2 id="video-camera-hardware">Video Camera Hardware</h2>
<p>For the video camera hardware I ended up with the <a href="https://www.logitech.com/en-us/product/c922-pro-stream-webcam">Logitech C922 Pro Stream</a>. I did several recordings with different phones, and especially newer phones gave pretty good results. One main issue was often that the slides behind the speaker were bright compared to the speaker. The other was the recording angle if the speaker was walking around. The Logitech records at a wide angle and seems to deal well with low-light environments. It probably has a lower resolution than lots of phones nowadays, but so far this has not become an issue. One nice benefit of the Logitech is that it can be installed on any stand or used with a pod to attach it to a wall.</p>
<p>One downside of the Logitech is that it needs to be attached to the computer through a cable, which limits its placement, and it seems there is no option (anymore?) to flip the image if the camera is attached to the ceiling.</p>
<h2 id="microphone">Microphone</h2>
<p>For the microphones it was always important that they are wireless. So far my experience with the <a href="http://www.samsontech.com/samson/products/wireless-systems/xpd-series/stagexpd1hs5/">Stage XPD1 Headset</a> and the <a href="http://www.samsontech.com/samson/products/wireless-systems/xpd-series/stagexpd1lm5/">Stage XPD1 Presentation</a> has been good, especially as they work off standard USB, run on a normal battery and can also be connected to a phone. I tried both the Presentation and the Headset variants of the XPD1. The main issue with the Presentation arises if someone wears “noisy” clothes. The disadvantage of the Headset is that it’s clumsy to transport.</p>
<p>At one meetup where the microphone was not available we tried the <a href="http://www.mxlmics.com/microphones/web-conferencing/AC-404/">MXL AC-404</a> which gave quite a good result even though the speaker was walking around. The benefit is that it can also be attached to the computer instead of the speaker itself.</p>
<p>A common problem related to microphones is questions from the audience. So far, I haven’t found a solution for this particular problem. Having the speaker repeat every question is often forgotten, and for questions that come in the middle of the presentation it doesn’t always work. So far I have just excluded the Q&A at the end from the recording.</p>
<h1 id="improvements">Improvements</h1>
<p>The above is only the setup I use right now. Please let me know on Twitter <a href="https://twitter.com/ruflin">@ruflin</a> if you have ideas on how to improve or change it. Or do you have a setup that also works with Windows machines? I will keep this page up to date with the most recent developments.</p>
<h1>Fix for Travis CI failure in forked Golang Repositories</h1>
<p>2015-08-13 (http://www.ruflin.com/2015/08/13/fix-for-travis-ci-failure-in-forked-golang-repositories)</p>
<p>If you have an open source <a href="https://golang.org/">Golang</a> project on <a href="https://github.com/">Github</a> and the project is forked, the <a href="https://travis-ci.org/">Travis CI</a> build will no longer work for the fork. The reason is that Travis fetches the Golang project and puts it inside the GOPATH according to the repository path. In my case, I forked the project <code>github.com/elastic/packetbeat</code> to <code>github.com/ruflin/packetbeat</code>. On my local setup I still put it under the path <code>$GOPATH/src/github.com/elastic/packetbeat</code>, as this is where the package is expected by Golang. But as Travis pulls the project from my repository, it puts it into <code>$GOPATH/src/github.com/ruflin/packetbeat</code>. To solve this issue, the following must be added to your <code>.travis.yml</code> file:</p>
<pre><code class="language-shell">before_install:
- mkdir -p $HOME/gopath/src/github.com/elastic/packetbeat
- rsync -az ${TRAVIS_BUILD_DIR}/ $HOME/gopath/src/github.com/elastic/packetbeat/
- export TRAVIS_BUILD_DIR=$HOME/gopath/src/github.com/elastic/packetbeat
- cd $HOME/gopath/src/github.com/elastic/packetbeat
</code></pre>
<p>These are the exact same commands Travis uses to set up the project, but with the changed path. This means Travis sets up the project twice, but since all of these commands are very fast, this is not an issue.</p>
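<p>The relocation can also be simulated locally with a few lines of plain shell. This is only a sketch: the directories are created with <code>mktemp</code> as stand-ins for the real checkout and GOPATH, and <code>cp -a</code> replaces <code>rsync</code>, which behaves the same for this purpose:</p>

```shell
# Simulate a Travis checkout of the fork, then copy it to the canonical
# import path that the Go toolchain expects.
CANONICAL="github.com/elastic/packetbeat"

TRAVIS_BUILD_DIR="$(mktemp -d)"   # stand-in for the forked checkout
GOPATH="$(mktemp -d)"             # stand-in for Travis' GOPATH
echo 'package main' > "$TRAVIS_BUILD_DIR/main.go"

# The actual fix: create the canonical path and copy the checkout there.
mkdir -p "$GOPATH/src/$CANONICAL"
cp -a "$TRAVIS_BUILD_DIR/." "$GOPATH/src/$CANONICAL/"
cd "$GOPATH/src/$CANONICAL"

# The source now sits where `go build` would look for the upstream package.
ls main.go
```

<p>After the copy, any build command that runs in the current directory sees the code under the upstream import path, so intra-project imports resolve.</p>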
<p>It is likely that there is an even simpler way to do this, such as overriding where Travis pulls the directory to, but so far I haven’t found such a solution. If you use a different solution, please mention it in the comments so that I can update the post.</p>
<h1>Measuring your software quality standards must be easy</h1>
<p>2014-06-13 (http://www.ruflin.com/2014/06/13/measuring-your-software-quality-standards-must-be-easy)</p>
<p>Quality in software projects is important; as already elaborated in the previous post, <a href="/2014/06/10/quality-in-software-projects-matters-from-day-one/">it matters from day one</a>. But as long as measuring and running quality metrics isn’t easy, it will be hard to follow through with quality.</p>
<p>For almost all programming languages there are tons of tools to run unit tests, check the code complexity or measure the code coverage. The easiest way to get these tools into your project is on day one. The longer you wait, the harder it gets to implement them. There will be dependencies which are not compatible, running the tests doesn’t work because you use a different folder structure and lots of other issues. The tools you choose to measure the quality will also influence the way you structure your project and your code. And I think this is good, as in most cases this leads to better structured code and less dependencies in the code which has lots of other advantages.</p>
<h2 id="setup-and-build-tools">Setup and Build Tools</h2>
<p>Installing the tools is not enough, it must also be easy to execute them and to get the metrics out. Otherwise, it is likely that you won’t use it and in case you share your project, others won’t use it. There are different tools out there such as <a href="http://en.wikipedia.org/wiki/Make_\(software\)">Makefiles</a>, <a href="http://ant.apache.org/">Ant</a>, <a href="http://maven.apache.org/">Maven</a>, <a href="http://gruntjs.com/">Grunt</a> and others, which allow you to automate running your tests or to get the code coverage. Pick the tool which is best for you and your project. For all tools, the execution should be just one command, such as <code>make coverage</code> in order to get the test coverage.</p>
<p>Not only must running the tools be automated, but also their setup. If your project has dependencies that are needed to build the test coverage, make sure that setting up the project is also as simple as <code>make setup</code>. There are also lots of dependency managers out there that automate the installation of dependencies for you, such as <a href="http://bundler.io/">Bundler</a>, <a href="https://packagist.org/">Packagist</a> or <a href="https://pypi.python.org/pypi/pip">Pip</a>. If you want to make it even nicer, make sure that <code>make coverage</code> also runs <code>make setup</code> in case some dependencies are missing.</p>
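<p>As a sketch, such a setup could look like the following Makefile. The target names <code>setup</code> and <code>coverage</code> come from the text above; the concrete commands (Bundler, a <code>run_tests.sh</code> script) are illustrative assumptions, not a recommendation:</p>

```make
# Illustrative Makefile: every quality task is a single short command,
# and test/coverage depend on setup so dependencies get installed first.
.PHONY: setup test coverage

setup:
	bundle install                    # or: pip install -r requirements.txt

test: setup
	./scripts/run_tests.sh            # hypothetical test-runner wrapper

coverage: setup
	./scripts/run_tests.sh --coverage
```

<p>With this in place, <code>make coverage</code> on a fresh clone installs the dependencies and produces the coverage report in one step.</p>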
<p>Having these simple commands in place makes it very easy for you to run the quality metrics as often as possible. And whenever new engineers join the project, they can start contributing with just one command. If you don’t have these kinds of scripts in place, you will spend a lot of time with every single engineer who joins the project. And in case some dependencies change, others won’t know how to get back to the right setup. This will not only cost you a lot of time, but it will also lead to frustration on all sides. To draw the analogy to building a car: not having the script is like lending your car to someone else, but before the person can start driving, they either need a two-hour crash course from you or they need to read the full manual, because in your car the accelerator is actually on the left side and the brakes are on the steering wheel. The next time the person borrows your car, you have moved the brakes to the left side and the whole process starts anew. If lending your car is that hard, you will not lend it to anyone, and the same is true for your code. If it is very hard for others to work with your code, you will not share it and they will not want to work with it.</p>
<h2 id="run-it-locally">Run it locally</h2>
<p>It is crucial that all your quality measurements can be executed locally. How can you make sure your code is good enough if you can’t test it locally? If you have to commit your code to a build server first, which runs all the tasks and tells you after several minutes or hours that you have a typo in a test, you will stop writing tests, since otherwise they slow you down. In addition, I think “broken” code should never make it into a shared repository / branch, as otherwise you will not only break your own code but everyone else’s, and your code will block the whole team. To draw the car analogy again: you implement a new brake system in your car and also put it into the car the test team uses. After one week you realise that no test team is left, since all of them died in car accidents due to the brakes that didn’t work. You didn’t expect them to already use the brakes, since they were not finished / tested by yourself, but unfortunately they were testing the accelerator in the car, which was ready for testing. Presumably your fellow engineers won’t die because of broken code, but frustration is guaranteed.</p>
<p>Make sure you and your team members can run all the metrics on your local machines, and keep the environment as close to the production system as possible. The best option is a virtual environment with a clean setup. Tools such as <a href="http://www.vagrantup.com/">Vagrant</a> or <a href="http://www.docker.com/">Docker</a> can make your life much easier. Do not run the code directly on your local machines, since normally every engineer has special environment variables or tools installed on their computer. This can make all the quality metrics pass locally but fail on another system, or the opposite. A simple example concerns engineers who develop on OS X. By default, OS X has a case-insensitive file system; Linux, which most servers run on, does not. So if you include the file <code>image.png</code> but it is actually called <code>Image.png</code>, it will probably work on your local machine but not on the production server. Run your code in an environment very similar to production in order to make sure it works. And again, make sure the setup of this virtual environment is automated so everyone in your team will use it.</p>
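<p>The OS X / Linux difference described above is easy to demonstrate with a few lines of shell (the file names are purely illustrative):</p>

```shell
# Create a lowercase file, then probe for the capitalised name.
# On a case-sensitive filesystem (a typical Linux server) the probe fails;
# on a default OS X volume it would succeed -- exactly the surprise that
# only shows up once the code reaches production.
workdir="$(mktemp -d)"
touch "$workdir/image.png"

if [ -e "$workdir/Image.png" ]; then
  echo "filesystem is case-insensitive"
else
  echo "filesystem is case-sensitive"
fi
```
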
<h2 id="conclusion">Conclusion</h2>
<p>In the optimal case, you would not even have to run your quality metrics manually. Instead, they would run every time you change your code and immediately give you direct feedback. There are tools such as <a href="https://github.com/mynyml/watchr">watchr</a> which are intended for continuous testing and run the tests every time you change the code. Getting feedback on your code as fast as possible is crucial in order to produce better code. Tests should not slow down your development but speed it up. Make sure that setting up your project and running your quality metrics is fully automated and happens as fast as possible. If your tests take a day to execute, no one will ever run them except your continuous integration system over night. That would mean the feedback cycle is one day long, giving you a maximum of 365 rounds of feedback per year.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a> by <a href="http://www.martinfowler.com/">Martin Fowler</a></li>
<li><a href="https://www.youtube.com/watch?v=n04Xgv9aXg4">Vagrant, Packer, Serf: Maximum Potency DevOps</a> by <a href="https://twitter.com/mitchellh">Mitchell Hashimoto</a></li>
</ul>
<h1>Quality in software projects matters from day one</h1>
<p>2014-06-10 (http://www.ruflin.com/2014/06/10/quality-in-software-projects-matters-from-day-one)</p>
<p>As I see <a href="http://en.wikipedia.org/wiki/Software_quality">quality in software</a> as one of the most important features when building software, I will dedicate my next blog posts to software quality and quality standards. For most non-engineers, quality in software projects is something very abstract. I will therefore now and then make comparisons to building a car, since most people are more familiar with cars and objects they can actually touch and feel. My first post is about why it matters to get quality into your software project from day one.</p>
<p>Quality in software projects is a topic that lots of engineers talk about and also embrace. I rarely witness discussions where engineers argue against the importance of quality. The only situation where the question of quality might be an issue is when talking about building a prototype or writing the first lines of code of a project. Should you have quality standards in your prototype? Should you start your project already with tests?</p>
<h2 id="software-runs-multiple-times">Software runs multiple times</h2>
<p>My answer to this is: if you plan to run your software multiple times, you should. If you build a prototype, for example during a hackathon, that only needs to run once, perhaps you can build it without including some basic tests. But even in this situation, I believe that having some basic quality measures in your code will prevent nasty surprises during your presentation.</p>
<p>The beautiful thing about software is that it normally not only runs once, but can be executed thousands or millions of times without additional costs (except server costs). If you build software that is only executed once, why would you even build it? That is why quality standards also matter in your prototype. Your prototype will be executed lots of times and will go through various iterations. A prototype will have lower quality standards than a production system. To draw the comparison to building a car: the first time you build a new car, you will not test 100 times whether the blinker works as expected, but you will make sure that the brakes actually work. The same goes for your prototype: make sure you have at least tested the core functionality.</p>
<h2 id="proto-duction">Proto-duction</h2>
<p>More than once I have witnessed a prototype actually making it into production, the so-called <a href="http://blog.codinghorror.com/new-programming-jargon/">Proto-duction</a>. No, this should not happen, but once you start shipping your prototype and actual real users start to use it, it is very hard to convince the users and your business unit that you have to start from scratch again because you didn’t plan this software to actually be for users. To make the comparison with the car again: we shipped it to the user and made sure the accelerator actually works as expected, as this is the first thing users test, but we didn’t put much thought into how the brakes behave if the car is going downhill and not on a flat street. Depending on how bad the “brake” issue in your software is, perhaps it makes sense to take it away from your users again and rebuild it from scratch.</p>
<h2 id="conclusion">Conclusion</h2>
<p>To prevent Proto-duction issues, get quality into your software projects from day one and make sure your quality is easily measured. More on how to make measuring your quality easy in the next post.</p>
<h1>What HTTP Has To Offer Besides Code 418</h1>
<p>2014-02-14 (http://www.ruflin.com/2014/02/14/why-you-should-know-the-http-status-code-418)</p>
<p>The internet is an integral part of our daily lives. We would be lost without the internet at work, and we depend upon it to guide us to a good restaurant or hotel on our holidays. None of this would be possible without internet protocols. <a href="http://en.wikipedia.org/wiki/Internet_protocol_suite">TCP/IP</a>, <a href="http://en.wikipedia.org/wiki/Dhcp">DHCP</a>, <a href="http://en.wikipedia.org/wiki/Dns">DNS</a>, <a href="http://en.wikipedia.org/wiki/Http">HTTP</a> and many other internet protocols have been around for the last 20 years. Even more amazingly, they have changed only very little.</p>
<p>The largest part of the “visible” web has always been running on the Hypertext Transfer Protocol, known as HTTP. HTTP/1.0 was introduced in May 1996 as <a href="http://tools.ietf.org/html/rfc1945">RFC 1945</a>. HTTP/1.1 followed shortly after and was later refined as <a href="http://www.w3.org/Protocols/rfc2616/rfc2616.html">RFC 2616</a>. Since then, the protocol has received some minor improvements, but has stayed mainly the same.</p>
<p>In a fast-living environment such as the internet, 20 years of staying almost the same feels like several lifetimes. In their impact, protocols are comparable to the invention of the wheel; both have been serving their purpose in many different environments and with different technologies. The inventors of HTTP, <a href="http://en.wikipedia.org/wiki/Tim_Berners-Lee">Tim Berners-Lee</a> and his colleagues, were very smart people who not only thought of what they needed back then, but also of how the protocol might be used in the future. If you study the RFC in detail, you see that HTTP does not offer only two, four or five <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9">request methods</a>, but eight of them. Furthermore, it has a total of 41 status codes defined.</p>
<p>The downside of having something that has stayed the same for such a long time is the tendency we have to forget how good it actually is. As with the wheel, we need to take care not to reinvent internet protocols. Although the internet is all around us, only a small minority actually understands how it works. Most of them are engineers. And even engineers might have forgotten, or they might be younger than the protocols themselves and never have learned them in the first place.</p>
<p>Nowadays, we are used to very quickly building small applications with APIs and frameworks. Seemingly, there is no need to understand the basic tools. Most engineers know at least the GET and POST request methods and some of the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10">status codes</a>, such as 200 or 404. But most of us don’t have a deeper understanding, even though HTTP is at the core of most of the web applications we build.</p>
<p>So what about status code 418? Code 418 is part of the <a href="https://tools.ietf.org/html/rfc2324">Hyper Text Coffee Pot Control Protocol</a> (HTCPCP/1.0), which was introduced as an April Fools’ joke in 1998. Code 418 is therefore neither listed in the official RFC, nor should it be used as a status code. If it still haunts the internet, it is because we are able to use the tools to build applications without always understanding the tools themselves.</p>
<p>If we use a 418 code, or if we don’t know the difference between a 301 and a 302 code, we damage the internet little by little. Let’s re-learn HTTP and rediscover what it has to offer. For example, a status code that is not very well known but, from my point of view, very useful is <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3">202</a> (Accepted). Only by understanding internet protocols can we push them forward and be ready for <a href="http://en.wikipedia.org/wiki/HTTP_2.0">HTTP/2.0</a>, which is lurking on the horizon and will help us adapt to the new needs of internet communication.</p>
<h1>Getting started with the Internet of Things</h1>
<p>2013-06-08 (http://www.ruflin.com/2013/06/08/getting-started-with-the-internet-of-things)</p>
<p>The term <a href="http://en.wikipedia.org/wiki/Internet_of_Things">Internet of Things</a> has been around for quite some time. Until recently, however, the things were mostly offline and could only connect to the internet through a mobile phone, for example via scanning a QR code. This changed in the last months and years with quite a few start-ups entering the market, such as <a href="http://ninjablocks.com/">Ninja Blocks</a> or <a href="http://www.smartthings.com/">SmartThings</a>, and devices such as <a href="http://hitekelec.com/myknut/">Knut</a> or <a href="http://supermechanical.com/">Twine</a>, as well as larger companies such as Philips with the <a href="http://www.meethue.com/">Hue Lightbulb</a>.
Some of them are based on standards such as <a href="http://www.zigbee.org/">Zigbee</a> or have their own implementation.</p>
<p>The introduction of very cheap and simple computers such as the <a href="http://www.raspberrypi.org/">Raspberry PI</a> and <a href="http://www.arduino.cc/">Arduino</a> had a huge impact on the internet of things. The sensors that had until then been mostly offline can suddenly be connected to a cheap computer which connects to the internet. This makes it possible to access the sensors at any time from any location, mainly from the smart phone. It is even possible to interact with the sensor, for example turning the light on or opening a door.</p>
<p>Most of these sensors can only do one thing and are basically “stupid”. The real power is, and will be, added through software. As these sensors are now all connected to each other, it is possible to write applications that interact with them and make the whole system intelligent.</p>
<p>From my point of view, the Internet of Things has just gotten started. The more the things disappear and the more intelligent the interaction becomes, the more it will help us in our daily lives, until we completely forget how it was ever possible to live without it. It will take a few years until these kinds of things are built into houses, but I think we are now at a good point to finally get the Internet of Things running.</p>
<p>After playing around with a Raspberry Pi for quite some time, I finally ordered a <a href="http://ninjablocks.com/products/ninja-blocks-kit">Ninja Blocks Kit</a>. I decided to go with the Ninja Blocks Kit for several reasons. First, I was only looking for a humidity sensor which I could monitor remotely, something various providers offer. What I like about Ninja Blocks is that it is all based on open source: not only is the software open source, the hardware plans are on GitHub as well. The product is at the moment probably more focused on geeks than on normal end users, but that’s how the whole thing starts. I would predict that in the next months there will be lots of small startups building on this basic infrastructure/service to provide end-user-friendly solutions for all kinds of things such as home monitoring, gardening and lots more.</p>
<p>I really look forward to getting my Ninja Blocks Kit. As soon as I receive it, I will post an update/review.</p>

From Joomla to Octopress (2013-05-19): http://www.ruflin.com/2013/05/19/from-joomla-to-octopress

<p>After more than a year, I finally managed to upgrade my private website / blog. My blog and gallery used to be based on Joomla, as I was a contributor to Joomla some years ago and had built some extensions which were running my site. Every time Joomla released a large update I ran into trouble, because either I was too lazy to upgrade my extensions or an external extension didn’t work properly with the upgrade. Naturally, it always took me forever to upgrade my website (even for security updates), and when I did upgrade, some things were broken.</p>
<p>Finally, more than a year ago, I decided to switch to something simpler, perhaps even something that I didn’t have to host myself. I tried different services like Tumblr, Posterous, Wordpress and more. These are all great solutions and make your life easier when you only want to blog. What bugs me about all of them is that it is again very hard to move to a different service if one of them shuts down, as Posterous recently did.</p>
<p>I prefer not to write my blog posts in a WYSIWYG editor, even one that supports raw text. From time to time, I want to insert some JavaScript or other things that such editors manage to break. So I was looking for a solution where I can write blog posts in a standardized format (HTML, Markdown), ideally in my preferred editor. Through GitHub Pages I stumbled upon <a href="https://github.com/mojombo/jekyll">Jekyll</a>. At first, I was very sceptical, as I thought it was too limiting; on my old blog, I had some fancy extensions such as a gallery and other things.</p>
<p>From Jekyll I moved to <a href="http://octopress.org/">Octopress</a>, as it offers some nice additions to Jekyll. What I really like about this solution is that it allows me to define my URLs, supports both pages and posts, and makes it really easy to deploy. The first plan was to migrate all blog entries from the old blog to the new one. As these were in two different languages and I couldn’t find an import script for Joomla, I decided to only migrate some blog entries related to Elastica and start from scratch.</p>
<p>So here is the new, clean blog, which will hopefully be filled with content again soon. At the moment I’m really happy with the new solution. It is very easy to create blog entries, and putting content online takes just one command.</p>
<p>In case you miss some old blog entries and would like to have them online again, please let me know by sending a tweet to <a href="https://twitter.com/ruflin">@ruflin</a>.</p>

Include Elastica in your project as svn:externals (2011-12-21): http://www.ruflin.com/2011/12/21/include-elastica-in-your-project-as-svn-externals

<p>As most of you know, <a href="https://github.com/ruflin/Elastica">Elastica</a> is hosted on <a href="https://github.com/">GitHub</a>, which means it uses <a href="http://git-scm.com/">git</a> as its <a href="http://en.wikipedia.org/wiki/Version_control">revision control</a> system. I have several projects which include Elastica but use <a href="http://subversion.tigris.org/">Subversion</a> as their version control system. Until now, I included Elastica as an external svn source by hosting my own Elastica svn repository. But yesterday I discovered that code on GitHub can also be checked out through svn. I immediately asked Google for more details about this feature and discovered several entries on the <a href="https://github.com/blog/966-improved-subversion-client-support">GitHub blog</a> which I had somehow missed.</p>
<p>It is not only possible to check out whole repositories, but also specific subfolders or tags, and you can even commit to the repository (which I didn't test). As my projects only use the Elastica library folder and don't need the tests and additional data, I check out only the lib folder. To check out the Elastica lib folder from version v0.18.6.0, use the following command:</p>
<pre><code>svn co https://github.com/ruflin/Elastica/tags/v0.18.6.0/lib/ .
</code></pre>
<p>If you have a lib folder in your project with all your frameworks and libraries and you want to add Elastica as an external source (which is quite useful), set the <a href="http://svnbook.red-bean.com/en/1.0/ch07s03.html">svn:externals property</a> on your library folder to the following:</p>
<pre><code>https://github.com/ruflin/Elastica/tags/v0.18.6.0/lib/Elastica Elastica
</code></pre>
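<p>To set the property from the command line, something like the following works (a sketch; it assumes your project's library folder is called lib):</p>
<pre><code class="language-sh"># Set the svn:externals property on the lib folder
# (svn 1.5+ syntax: URL first, then the local folder name)
svn propset svn:externals 'https://github.com/ruflin/Elastica/tags/v0.18.6.0/lib/Elastica Elastica' lib
# Commit the property change, then update to fetch the external
svn commit lib -m "Add Elastica as svn:externals"
svn update lib
</code></pre>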
<p>If you already have other sources added as externals to your repository (for example ZF), just add this line below your existing ones. The next time you update your repository, the Elastica folder with all its files will be checked out. To update to a newer version of Elastica, change the version number in the URL in your svn:externals property.</p>

Using Elastica with multiple Elasticsearch Nodes (2011-11-21): http://www.ruflin.com/2011/11/21/using-elastica-with-multiple-elasticsearch-nodes

<p>Elasticsearch was built with the cloud / multiple distributed servers in mind. It is quite easy to start an <a href="http://www.elasticsearch.org/guide/reference/modules/cluster.html">Elasticsearch cluster</a> simply by starting multiple instances of Elasticsearch on one or several servers. Every Elasticsearch instance is called <a href="http://www.elasticsearch.org/guide/reference/api/admin-cluster-nodes-info.html">a node</a>. To start two instances of Elasticsearch on your local machine, just run the following command in the Elasticsearch folder twice:</p>
<pre><code class="language-sh">./bin/elasticsearch -f
./bin/elasticsearch -f
</code></pre>
<p>As you will see, the first node is started on port 9200, the second on port 9201. Elasticsearch automatically discovers the other node and forms a cluster. Elastica can be used to retrieve all node and cluster information. In the following example, the cluster object (Elastica_Cluster) is first retrieved from the client, and then the <a href="http://www.elasticsearch.org/guide/reference/api/admin-cluster-state.html">cluster state</a> is read out. Then all cluster nodes (Elastica_Node) are retrieved and the name of every node is printed. Every cluster has at least one node, and every node has a specific name.</p>
<pre><code class="language-php">$client = new Elastica_Client();
// Retrieve an Elastica_Cluster object
$cluster = $client->getCluster();
// Returns the cluster state
$state = $cluster->getState();
// Gets all cluster nodes
$nodes = $cluster->getNodes();
foreach ($nodes as $node) {
    echo $node->getName();
}
</code></pre>
<h2>Client to multiple servers</h2>
<p>As Elasticsearch is a distributed search engine that can run on multiple servers, some servers can fail and search still works as expected, because the data is stored redundantly (replicas). The <a href="http://www.elasticsearch.org/guide/reference/api/admin-indices-create-index.html">number of shards and replicas</a> can be chosen for every single index during creation. Of course, this can also be set with Elastica through the mapping, as can be seen in the <a href="https://github.com/ruflin/Elastica/blob/master/test/lib/Elastica/IndexTest.php">Elastica_Index test</a>. More details on this perhaps in a later blog post.</p>
<p>One of the goals of a distributed search index is availability: if one server goes down, search results should still be served. But if the client only connects to the server that just went down, no results are returned anymore. Because of this, Elastica_Client supports multiple servers, which are accessed in round-robin fashion. This is currently the only, and also the most basic, option. So if we start nodes on ports 9200 and 9201 as above, we pass the following arguments to Elastica_Client to access both servers:</p>
<pre><code class="language-php">$client = new Elastica_Client(array(
    'servers' => array(
        array('host' => 'localhost', 'port' => 9200),
        array('host' => 'localhost', 'port' => 9201),
    )
));
</code></pre>
<p>From now on, every request is sent to one of these servers in round-robin fashion. Instead of localhost, external servers can be used as well. I'm aware that this is still a quite basic implementation. As some of you have probably already realized, it is no safe failover method, as every second request still goes to the server that is down. One idea is to define a response-time threshold for every server; if a server does not answer within it, the query goes to the next server. In addition, it would be useful to store information about unavailable servers somewhere, in order to use it for the next request, so that only one client has to wait for the unavailable server. Storing this information is somewhat of an issue, since Elastica does not have any storage backend.</p>
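<p>The round-robin selection itself can be sketched in a few lines of PHP (an illustration only; <code>pickServer</code> is not part of the actual Elastica internals):</p>
<pre><code class="language-php">// Sketch of round-robin server selection: request $i goes to server $i mod n.
function pickServer(array $servers, $requestCount)
{
    return $servers[$requestCount % count($servers)];
}

$servers = array(
    array('host' => 'localhost', 'port' => 9200),
    array('host' => 'localhost', 'port' => 9201),
);

// Requests alternate between the two servers: 9200, 9201, 9200, 9201
for ($i = 0; $i < 4; $i++) {
    $server = pickServer($servers, $i);
    echo $server['host'] . ':' . $server['port'] . "\n";
}
</code></pre>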
<h2>Load Distribution</h2>
<p>This client implementation also allows the load to be distributed over multiple nodes. As far as I know, Elasticsearch already does this quite well on its own, but it helps if more than one node can answer HTTP requests. The method above is therefore really useful if you run more than one Elasticsearch node in a cluster and want to spread your requests across all of them.</p>
<p>It is planned to enhance this multiple-server implementation in the future with additional parameters such as a priority per server and some other ideas. Please feel free to write down your ideas in the comment section or directly create a pull request on GitHub.</p>

How to Log Requests in Elastica (2011-11-20): http://www.ruflin.com/2011/11/20/how-to-log-requests-in-elastica

<p>In the <a href="https://github.com/ruflin/Elastica/tree/v0.18.4.1" target="_blank">Elastica release v0.18.4.1</a>, the capability to log requests was added. There is a general Elastica_Log object that can later also be extended to log other things such as responses, exceptions and more. The Elastica_Log constructor takes an Elastica_Client as a parameter. To enable logging, the client's log config variable has to be set either to true or to the path the log should be written to. This means that every client instance decides on its own whether logging is enabled or not.</p>
<p>The example below will log the message "hello world" to the general PHP log.</p>
<pre><code class="language-php">$client = new Elastica_Client(array('log' => true));
$log = new Elastica_Log($client);
$log->log('hello world');
</code></pre>
<p>If a file path is set as the log config param instead, the "hello world" message will be written to the /tmp/php.log file.</p>
<pre><code class="language-php">$client = new Elastica_Client(array('log' => '/tmp/php.log'));
$log = new Elastica_Log($client);
$log->log('hello world');
</code></pre>
<p>If logging is enabled, all requests are currently logged automatically. Requests are converted to log messages in a special way: each log message is written in shell format, so every log line can be pasted directly into a shell for testing. This is quite nice for debugging and for creating a gist when others ask what a query looks like. Furthermore, it makes it simpler to figure out whether a problem relates to Elastica or not.</p>
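<p>Conceptually, the conversion can be sketched as follows (a simplified illustration; <code>requestToCurl</code> is not an actual Elastica function):</p>
<pre><code class="language-php">// Render a request as a paste-able curl command, similar in spirit
// to what Elastica_Log produces for each request.
function requestToCurl($method, $url, array $data)
{
    return "curl -X" . $method . " " . $url . " -d '" . json_encode($data) . "'";
}

echo requestToCurl('PUT', 'http://localhost:9200/test/_settings',
    array('index' => array('number_of_replicas' => 0)));
// curl -XPUT http://localhost:9200/test/_settings -d '{"index":{"number_of_replicas":0}}'
</code></pre>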
<p>For example, the logged output of a request that updates the number of replicas setting of the index test would look like this:</p>
<pre><code class="language-console">curl -XPUT http://localhost:9200/test/_settings -d '{"index":{"number_of_replicas":0}}'
</code></pre>