The skills involved in software development don't have to be black magic. Several great services have launched in the past year to help individuals start working with code and improve their programming skills. Learning to program is like learning a spoken language in that it has a lot of secondary benefits beyond the direct skill. Programmers have a well-developed ability to create mental models for new ideas. They can adapt to change more quickly, and can develop a consistent logical framework for thinking about the world or a specific situation.
Codecademy.com has one of the best training programs anywhere for people who want to learn to code. In my experience, tools like this are in many ways superior to formal training or education. While formal schooling can give you a lot of the theory, background, and context, services like Codecademy focus the user on writing code and solving problems. This real-world style of experience, where you aren't just answering questions but actually writing programs that are run, is a great way to experience not only the fundamentals but also the power of a programming language.
When I first learned programming, whenever I encountered a new language's Hello World, I always got a smug feeling of satisfaction from writing a personalized message like "Bonjour Monde" or "Sup world?" instead of printing "Hello World". This isn't a feature that people typically discuss, but the ability for learners to deviate from the course in ways they find interesting is, in my opinion, critical to the development of these kinds of creative skills.
Google needs to move faster with Android and adopt a fixed release cycle. The recent launch of Android 4.2 caused several issues for Nexus device users, from new sources of lag and stutter in the user experience to completely losing December in the contacts application. Some may take this as an indication that Google needs to move slower with its Android releases and add more testing stage gates. My feeling is that this is the exact opposite of what they should do. Google, and modern software development in general, benefits from going faster.
What I mean by going faster is shipping more frequent rolling updates to the platform that do not require user intervention. Chrome for Windows/Linux/Mac is the gold standard for this ideal. Chrome development happens in the open, meaning all source code for the application is developed under open source licensing and is immediately available after developers commit it to the Chromium project. The Chrome team also has a fixed release cycle: every 6 weeks, all users of the software receive an update. Users can additionally opt in to the Beta or Developer channel to get early access to features, in exchange for giving up stability.
If Android moved to a fixed release cycle, it would have many benefits. First of all, partners such as carriers and manufacturers would be able to develop standardized processes around the adoption of Android. More frequent, smaller updates to the platform would also force Google to improve its update capabilities. Right now, non-Google Android devices are lucky to be upgraded for a full year after launch. If these devices were built and planned with the expectation of a 6 week release cycle, Google would be forced to improve the robustness, speed, and quality of the Android update mechanism. Making this process transparent and invisible to users would be a huge win for the Android platform in general and would continue to encourage user adoption.
Google's relentless quest to improve performance for all things technological continues. It has just hit a huge success with SPDY, its replacement for HTTP for web transactions. Now, with a few simple Linux commands, you can download, install, and activate an Apache module that will speed up everything for all of your users on supporting browsers like Chrome and Firefox. This is a great step because it addresses one half of the SPDY adoption question, and will help move the entire web forward into a faster, newer generation. Fortunately, there is no downside: browsers that do not yet support SPDY will simply fall back to the slower HTTP transaction.
Install SPDY on Apache by doing the following (assuming a 64 bit system):
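The original command listing isn't shown here; a sketch of the install using Google's mod-spdy beta package follows. The package URL and names reflect how Google distributed the module at the time and may have changed since.

```shell
# Download and install Google's mod_spdy beta package (64-bit)
wget https://dl-ssl.google.com/dl/linux/direct/mod-spdy-beta_current_amd64.deb
sudo dpkg -i mod-spdy-beta_current_amd64.deb
sudo apt-get -f install   # pull in any missing dependencies

# Enable the module and restart Apache
sudo a2enmod spdy
sudo service apache2 restart
```

Note that SPDY requires your site to be served over HTTPS, since the protocol is negotiated during the SSL handshake.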
That's it; you are done. You will now see Chrome users negotiating with SPDY. You can verify this by visiting Chrome's Network Internals page.
HexSLayer recently launched a few new versions. In the Android Market (now known as Google Play), HexSLayer 1.0.19 was launched, and in the Amazon App Store, HexSLayer 1.0.20 was launched. These may be the last updates for a few weeks, as there has been a major update to the packaging system I use, known as PyGame Subset for Android. Both of the published versions of HexSLayer use PyGame Subset for Android (PGSA) version 0.9.3. This has been a great framework for me, as I have been able to write applications and games in Python and then launch them simultaneously on Windows, Mac, Linux, and Android.
With the latest update of PGSA to version 0.9.4, there has been a change to how the system packages applications. Before, the system would copy over the raw Python source files (.py files), but now the tool packages only the optimized, compiled versions of Python applications (.pyo files). This means that if you come from an earlier version on Android and upgrade, you will end up with both versions on your device.
The problem with this upgrade path is that it leaves both versions of the files on your device, and the Python subsystem by default runs the old version of the application. That means that even though there is new code on your device, your phone will always run the old code. This has been the core issue preventing me from releasing HexSLayer 1.1.x.
I wrote to the author of PyGame Subset for Android, and discovered that he had a fix prepared that will allow me to continue using the old packaging method, while waiting for my users to upgrade to a version compatible with the new method.
As soon as PGSA 0.9.5 is released, I should be able to repackage and release the next set of features and bug fixes for HexSLayer.
Thanks for playing!
The primary audience for this article is those who already have a working Android development environment and are looking to get the latest and greatest from Google running in the emulator. Unfortunately for me, the default emulator settings that ship with Ice Cream Sandwich (ICS) did not let me actually run an emulated instance of the platform.
Fix the Ice Cream Sandwich Settings
To make everything work, I updated Eclipse to the latest ADT and SDK versions using both the Eclipse updater and the new SDK Manager. Then I downloaded API version 14 of the SDK, platform, etc. Finally, I opened the new AVD Manager and created a new AVD targeting ICS.
The default properties of a Max VM application heap size of 24 and a Device RAM size of 512 resulted in errors for me. I increased these properties and the machine booted like a charm (albeit extremely slowly).
I increased the Max VM application heap size to 64 and the Device RAM size to 1500. You may be able to get away with less; I haven't tried it. Make sure you have plenty of unused RAM before you attempt to increase the RAM available to the AVD.
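For reference, you can also make the same change by editing the AVD's config.ini by hand. The lines below reflect the values I described; the exact key names come from the SDK tooling of that era and may differ in other versions:

```ini
; ~/.android/avd/<your-avd-name>.avd/config.ini
hw.ramSize=1500
vm.heapSize=64
```

Restart the AVD after editing for the new values to take effect.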
That was it for me to get it working. Have fun and build some great applications!
PNG is a great free, lossless, compressed image format. One thing that may surprise you is that PNGs can often be compressed further, making the file size smaller than a normal image application will typically achieve. The good news is that there is a tool called OptiPNG that makes it easy to non-destructively improve the layout of PNG files.
On Ubuntu, install OptiPNG with sudo apt-get install optipng. Once it is installed, you can run it in its default mode by typing optipng file.png. If you feel like giving the application lots of time to achieve aggressive compression, you can use the -o flag to indicate additional effort. Comparing the default with -o7, I have found only minimal additional compression, which has almost never been worth the additional CPU time.
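The commands above look like this in practice (file.png is a placeholder for your own image):

```shell
# Install OptiPNG on Ubuntu
sudo apt-get install optipng

# Default optimization level: a good speed/size trade-off
optipng file.png

# Maximum effort: rarely worth the extra CPU time
optipng -o7 file.png
```

OptiPNG rewrites the file in place, so keep a backup if you want to compare sizes afterwards.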
Below are the results of one such compression attempt:
In May at Google I/O, Google announced that it would begin supporting multiple .apk files, and this feature has finally come to fruition. To get started with this new feature, visit the Android Developer Console and click on your application. At the top of the application information you will notice that the upload .apk feature is now gone. It has been moved to a new tab at the top of the window, "APK files".
Using the Multiple APK Tool
Each of the .apk files that you upload to the Android Developer Console is examined. The system looks at the minimum SDK version and the target SDK version. The new features allow publishers to maintain multiple .apk files that target different screen sizes, different platform versions, and OpenGL support. This is simultaneously a boon and a potential disaster. It should decrease the difficulty of maintaining code that targets and supports different feature sets, but at the same time it enables the further fragmentation and differentiation of Android devices. In an ideal world, the platform would be standardized in such a way that multiple versions wouldn't be necessary. In the practical world, Google continues to update and improve the API and platform. They break backward compatibility and come up with new and better ways of performing software development and achieving better user interactions. This means that multiple .apk files will end up supporting the practical fragmentation of the platform.
One of the best ways to leverage these new abilities will be to fork applications at major points in platform revisions. For example, it will soon make sense to have 3 different APKs, depending on how much historic support is desired. The first group would cover platform versions 1.5-2.1. The second group would cover 2.2-3.2. The final group would target the new Ice Cream Sandwich 4.0 platform version.
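As a sketch, the three manifests would differ in their uses-sdk declarations. The API levels here follow the standard platform mapping (1.5 is level 3, 2.1 is 7, 2.2 is 8, 3.2 is 13, 4.0 is 14); the grouping itself is just the illustration from above:

```xml
<!-- APK 1: platform versions 1.5 - 2.1 -->
<uses-sdk android:minSdkVersion="3" android:maxSdkVersion="7" />

<!-- APK 2: platform versions 2.2 - 3.2 -->
<uses-sdk android:minSdkVersion="8" android:maxSdkVersion="13" />

<!-- APK 3: Ice Cream Sandwich 4.0 and later -->
<uses-sdk android:minSdkVersion="14" />
```

Each APK would also need a distinct android:versionCode so the Developer Console can tell them apart.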
This morning I watched Google I/O and learned several important details about Android development that are not readily apparent as one begins developing Android applications. Fortunately, these items are extremely easy to address, and with a few minor changes, the usability and quality of your applications will drastically improve. I loved the presentations given during Google I/O, and I could watch days of these sorts of tips, as Android development is a vast and expansive topic.
Tip 1: Specify TextType
This tip applies to any EditText objects that you include in your application. When building Android applications with text fields, you should inform the platform what the intent of the EditText is. You can do this in one of two ways. The first is by editing the layout used by your application. Here is an example layout that includes an EditText that has been specified as a URL. This will ensure that the keyboard, whenever possible, shows the most appropriate keys for URL entry.
Use the code as follows:
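The original layout snippet isn't shown here; a minimal sketch using the standard android:inputType attribute looks like this (the id and dimensions are placeholders):

```xml
<EditText
    android:id="@+id/url_entry"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="textUri" />
```

Other useful values for android:inputType include textEmailAddress, phone, and number, each of which adjusts the soft keyboard accordingly.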
The second method is to set the same property through the graphical layout editor UI.
Tip 2: Specify Audio Stream Control
This one is a quick fix. By default in Android, the hardware audio buttons control whichever audio stream is most in the foreground. This means that if you have audio playing, they will control that audio; otherwise they will default to the ringer. This is easy to fix by calling the following code in your onCreate:
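The original snippet isn't shown here; the standard API for this is the Activity method setVolumeControlStream, so the call almost certainly looked like this (the layout name is a placeholder, and this requires the Android framework to run):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    // Make the hardware volume keys control the music stream
    // instead of the ringer while this Activity is in the foreground.
    setVolumeControlStream(AudioManager.STREAM_MUSIC);
}
```

If your application plays a different stream type, substitute the matching AudioManager constant.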
Tip 3: Specify Target SDK Version
The final tip is something that I most likely missed as part of Android Development 101. By default, your application will not specify a minimum or target SDK. Not specifying a target SDK version means that the Android platform will assume your application targets the lowest SDK version, or the minimum SDK version if you specified one. This is becoming more important starting with Honeycomb, because Honeycomb uses the target SDK version when choosing some of the visual elements to show. The example that led to my discovery of this was the menu icon: if you target the wrong SDK version, the menu icon will be shown regardless of whether you have a menu or not.
Fix it by specifying the correct versions. For me at the time of writing, they are as follows:
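The author's original values aren't shown here. The declaration itself goes in AndroidManifest.xml; the API levels below are purely illustrative, not the values from the original post:

```xml
<uses-sdk
    android:minSdkVersion="7"
    android:targetSdkVersion="11" />
```

minSdkVersion is the oldest platform your code actually supports, while targetSdkVersion tells the platform which behaviors and visual conventions (such as the Honeycomb menu icon) to apply.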
When developing web pages for mass consumption, one of the most important things you can do with your content is to make sure it is transmitted in a compressed form. For web pages, the typical way to achieve this is to gzip your content. If your browser supports this, it reports its support to the server in one of the headers it sends when requesting a page. If the web server is configured correctly, it will render the content, place it into memory, run a compression algorithm on it, and transmit it. On the client side, your browser will then take the compressed stream, uncompress it, and render the page for the user. This may seem like a lot of steps, but in actuality, the slowest part of the request/transmit/render process for web pages is usually the network transfer, and compression can speed things up many times.
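The payoff is easy to demonstrate with Python's standard gzip module. Markup-heavy text like HTML is highly repetitive and compresses dramatically (the sample page below is made up for the illustration):

```python
import gzip

# A repetitive, markup-heavy body, like typical HTML
page = b"<html><body>" + b"<p>Hello, compressed world!</p>" * 500 + b"</body></html>"

compressed = gzip.compress(page)
print("original:", len(page), "bytes; gzipped:", len(compressed), "bytes")

# The round trip is lossless: the browser recovers the exact original
restored = gzip.decompress(compressed)
```

Real pages are less repetitive than this toy example, but HTML, CSS, and JavaScript still routinely shrink to a fraction of their original size.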
When setting up this type of compression, or when testing an existing server, it can be very helpful to have a tool that reports whether a given URL is being sent gzipped or not. To address that, I have built a GZIP Detector. Simply give it a URL, and it will tell you whether or not that URL is being transmitted in a compressed format. There are many other ways to detect this, but they aren't always handy, or they require specific software or browsers.
Hopefully this GZIP Detector will provide value to someone besides me.
Session management is one of PHP's greatest strengths. The only downside is the expiration period for sessions. By default in Ubuntu, the timeout is set to about 24 minutes, which is fine for most situations. When editing articles or doing other work that can take more than 24 minutes, however, it's frustrating to submit a form and have a session expiry lose the data you have been working on.
One way to overcome the time limit is to perform regular automatic saves via AJAX, but another technique is to increase the session lifetime. This change needs to be made in the PHP configuration. In some distributions, this file is stored in /etc/php.ini. In Ubuntu and some other distributions, it is stored in /etc/php5/apache2/php.ini. The setting is session.gc_maxlifetime, which controls session cleanup. After making any changes to this setting in php.ini, make sure you restart Apache to ensure that the version of PHP being used by Apache picks up your configuration changes.
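As an example, raising the limit to one hour would look like this in php.ini. The value 3600 is an arbitrary choice; pick whatever fits your editing sessions:

```ini
; Number of seconds after which session data is considered garbage
; and becomes eligible for cleanup (default is 1440, i.e. 24 minutes)
session.gc_maxlifetime = 3600
```

After saving the change, restart Apache (for example with sudo service apache2 restart) so the new value takes effect.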
In PHP, sessions are probabilistically erased at regular intervals after gc_maxlifetime. By changing this max lifetime, you increase the time for which the session will be valid. You could also make changes to the divisor or probability of cleanup, but I have found that these settings can simply be left to their original values without negatively affecting the functioning of the sessions.
After sitting on a few pending changes to the Chromium extension Reddit Imager for a few months, I finally got a chance to make some very minor but useful updates to this tool.
The simplest of the changes was adding a few extra strings to the matcher, so now imgur links using "gif", "png", or "jpeg" will match and show full size on Reddit.
The bigger, more important change was adding some naive support for imgur links that aren't direct links to images. Formerly, many links were not supported by the tool: links to an "imgur page" that had advertisements and other content rather than just an image. Now, Reddit Imager detects and relinks these so that the true image is shown full size, rather than leaving the link alone.
Apache has a very decent authentication scheme. In Apache you define what are called "Authorization Realms". You can define these in your Apache configuration files on a directory or site, or you can define them in a .htaccess file. Each Authorization Realm specification points to a password file and looks like this:
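The author's original snippet isn't preserved here; a typical realm definition of the kind described looks like this (the realm name, file path, and usernames are placeholders):

```apache
AuthType Basic
AuthName "Members Only"
AuthUserFile /etc/apache2/passwords
Require user alice bob
```

The password file referenced by AuthUserFile is created and maintained with the htpasswd utility, and should live outside the web-served directory tree.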
This configuration defines a few things: which users to allow, where the passwords are stored, and the name of the Authorization Realm. The name of the realm is important because you can have multiple layers of authentication. For example, you could have a folder that any valid user can access and a subfolder that only 2 specified valid users have access to. If you ensure that the AuthName is the same, the user will only have to enter their username and password once, whereas if you have different AuthNames, the user will be required to authenticate into each realm separately, even if the password file is the same.
Important Security Note
Any time you use authorization through Apache, you should ensure that your users are connecting with HTTPS; otherwise the passwords will be sent in plain text, and anyone listening in the middle could see and capture them.
I need to make some decisions soon about MortalPowers.com. This includes questions about the appearance, functionality, the focus of my time, and what sort of browsers and codecs I want to support. To answer these questions, I'm going to try to go back to the original question, which is asking myself what are my goals, and why am I building this site.
What is my goal with this website? My goals are to increase traffic; to provide a place for my development projects to live, grow, and be shared; and finally to meet the needs of my visitors.
The last goal is going to be one of the most central, because it will inform the first two. But to answer this question, I need to know, as accurately as possible, who my visitors are. I'm going to use the last month as the window and break down my traffic from multiple sources (Google Analytics and AWStats primarily).
Who visits my site?
Right out of the gate, 8% of my visitors will be unable to view any HTML5 video. At this point, I'm comfortable instantly dropping the 8% of my visitors who still use Internet Explorer. This is partially a philosophical decision, but it's also in part because most of the people who come using that browser will have no interest in anything I have to say.
Only about 3% of my Firefox users will be unable to view my Ogg Theora videos. This indicates that Firefox users tend to update their browser relatively frequently. This is great, because Firefox and Mozilla have committed to supporting a lot of the technologies I want to use.
It seems that a plurality of my users (45.5%) still use Windows to access my site. This is acceptable to me because Windows users can still be interested in open source, in software development, or even in trying or understanding Ubuntu. I will try to do what I can to support these individuals. As an interesting tidbit, Windows use breaks down as follows for me: 43.9% use Windows 7, 39.22% use Windows XP, and 14.03% use Vista; the rest use Windows Server 2003.
Less than 7% of my users have screens 1024 pixels wide; everyone else has 1280 or much greater.
What this means for Video
One of the goals that has most recently bubbled to the surface for me is supporting my video gallery on mobile. I'm continually disappointed with YouTube's mobile site because its video collection is a subset of the full video gallery. Looking at the video support offered by my site's users, I can cover almost all of my desktop users with Ogg Theora, and until WebM becomes universal, Theora will be my standard. I'm not anticipating WebM becoming available across my user base until at least 2012, but time will tell. On mobile, the only option is H.264. Based on these competing values, I'm going to attempt to have 2 encodings for each of my videos. I'm not sure yet whether this will mean a mobile version of the gallery or a single HTML5 video gallery with multiple sources, and I'm definitely open to feedback.
Over the next few months and years, I anticipate that mobile usage of my website will grow from the lowly 1% to 20%-50%. Google itself has declared a "mobile first" strategy, and the human workflow of using a phone seems much more natural than the bulky process of using a laptop or desktop computer, despite the speed, cost, and additional capabilities that desktop computers provide.
I hate the idea of supporting the H.264 standard, but until WebM is available, I'm going to have to make a decision in favor of my users rather than in favor of a philosophy that Android, the iPhone, and Internet Explorer seem to have already rejected. It still makes no sense to me why mobile phones and Internet Explorer don't at least offer Theora video decoding, since the software is free (as in speech) for them to implement.
Someone asked for an example of Apache virtual host configuration. All of my virtual hosts have the same format as the following two examples:
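The original examples aren't preserved here; two representative name-based virtual hosts look like this (the domain names and paths are placeholders, and the NameVirtualHost directive reflects Apache 2.2-era syntax):

```apache
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```

Apache picks the vhost whose ServerName or ServerAlias matches the Host header of the incoming request, falling back to the first vhost when nothing matches.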
Types of Virtual Host Configurations
Name Based Virtual Hosts
There are two main types of virtual host configuration. The first, called "name-based" virtual hosting, matches the examples above. In this case, the domain name provided as part of the request determines the site that Apache matches. Name-based hosting does not work with SSL because Apache doesn't know the requested domain name until AFTER the encryption negotiation is complete.
Address Based Virtual Hosts
The second type of virtual host is address-based hosting. This is only used when the server has multiple IPs. The server uses the interface (server IP or port) of the incoming request to determine the site to respond with. This is almost always used to establish SSL hosting in combination with name-based hosting.
Example address based virtual hosts:
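The original examples aren't preserved here; a sketch of two address-based vhosts follows (the IPs, domains, paths, and certificate locations are all placeholders):

```apache
<VirtualHost 192.0.2.10:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost 192.0.2.11:443>
    ServerName secure.example.com
    DocumentRoot /var/www/secure
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/secure.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/secure.example.com.key
</VirtualHost>
```

Because each SSL site gets its own IP address, Apache can present the right certificate without needing to know the requested hostname first.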
Dynamic Virtual Hosts
The third type (which is new to me) is mass or dynamically configured hosting. These hosts use the IP or domain name to determine the DocumentRoot of the site. Read the Apache spec for details. Here is an example of what would go in your httpd.conf:
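The original snippet isn't preserved here; the core of a dynamic configuration using mod_vhost_alias looks like this (the base path is a placeholder):

```apache
# Serve each hostname from a directory named after it,
# e.g. a request for blog.example.com maps to /var/www/hosts/blog.example.com
UseCanonicalName Off
VirtualDocumentRoot /var/www/hosts/%0
```

UseCanonicalName Off is needed so that the document root is derived from the hostname the client actually sent, rather than the server's configured name.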
When used with VirtualDocumentRoot, %0 is replaced with the full domain name of the request.
For the longest time I have been perplexed by the issue of an HTML div tag and its children being affected by a float present in a grandparent element. The behavior I was seeing was that whenever I would float any element inside my div, it would be pushed down below the floated elements in the grandparent. This was definitely not desired, as it made the vertical location of the new floated element unpredictable. Researching this issue led me to very few details on the topic.
In the end, going back to the W3C specifications gave me my answer. Specifically, the section on block formatting was useful. It seems that what I really wanted was to establish a new block formatting context. To do so, I set the overflow property, such as overflow: hidden;. Doing so set up a new context and allowed my elements to float as desired, while ignoring the grandparent context.
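In CSS, the fix amounts to a single declaration on the container (the class name here is a placeholder for the affected div):

```css
/* Any overflow value other than "visible" establishes a new
   block formatting context, so floats inside this element no
   longer interact with floats in ancestor elements. */
.inner-column {
    overflow: hidden;
}
```

Be aware that overflow: hidden will clip any content that genuinely needs to extend outside the box, such as drop-down menus positioned beyond the container's edges.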
As a strange and useful side effect, I was also able to remove the margin-right: 200px; from my main column CSS, as apparently that part is handled just fine by the overflow: hidden;. Testing in Firefox and Chromium shows that this works great and looks just fine. If someone wants to test it in another browser and let me know how badly I have broken my site, I'd appreciate it. :)
One of the most common reasons an open source developer builds a tool is that the developer wants the tool him or herself. This is the case with Reddit Imager, a tool designed to improve Reddit. The concept is simple: whenever there is a URL or image format we recognize (very few right now), replace the tiny thumbnail with the blown-up version, maximized to the screen width. I launched this extension on the Chrome Extension Gallery because I wanted to try out the gallery at least once, despite my dislike of giving Google control. Now the extension has a couple of users, and I want to make sure they get the features and capabilities they want.
What's new in 1.1?
Reddit Imager 1.1, released today, adds an options page where users can specify their mode. Normal mode automatically replaces matching thumbnails with their blown-up versions. I also added activated mode, which waits for the user to click on a thumbnail before blowing it up. I find this feature a little confusing because there are very few thumbnails we are actually capable of blowing up, but it was requested and I wanted to try it out.
Let me know if you think I should switch default modes, add other options, or add some type of visual indicator for which images can be blown up on the home page.
Report issues or feature requests here: http://github.com/PeEllAvaj/redditimager/issues
One of the tools that I end up using a lot is GZIP detection. It's really useful to find out if a given URL uses GZIP. It's easy to do this with Firebug or with Chromium's developer tools, but those tools aren't always available. GZIP is important for saving bandwidth for content sent over the internet, and detection is the first step to understanding.
Check out the GZIP Detector.
Something I have been struggling with for the past few months is the concept of building additional backlinks. Backlinks are links from other websites that refer to your content. These links will bring visitors as well as authority to your site.
The best guide I have found on this comes from Matt Cutts at Google. He breaks the techniques down as the following:
- Build Great Content - Filling your site with quality content that is useful or interesting will bring users and traffic.
- Controversy - Discussing controversial topics or making provocative assertions about content can attract links, but can end up hurting your relationships with other people or sites.
- Participate in the Community
- Answer Questions
- Help Other People
- Original Research - If you spend time to produce research on or to understand a new topic, or an old topic with a new approach, your content will be unique and will attract new users and traffic.
- Newsletters - Make it easier for your existing content to be delivered to users.
- Social Media - Connect to the people in the places where they spend their time.
- Present or Give Speeches - Giving a speech or lecture as an expert on a topic will drive traffic to the web content you have produced.
- Create How-Tos - Very specific how-tos can help save you and others future time, and can match up well with people looking for these answers. Having a resource that no one else has is extremely useful.
- Make a Product or Service - Solving a problem will help bring people to you, as well as contribute back to the community and the world.
- Have Good Site Architecture - Being able to crawl and bookmark your site ensures people will find it easy to link to your site and your content.
- Make Videos - Simply talking to a camera can add personality and value and entertainment to your visitors.
Whenever you have form data, it is important to communicate to users what is happening with it. The first degree of functionality you can provide is notifying users when they have modified a form and attempt to leave the page without saving. The next degree is auto-save, but here we will limit this article to user communication.
There are a lot of ways to achieve this type of user communication, and I'll just be covering one of them. The first step is to build a change detection mechanism.
From there, you hook the page events and change detection up to the forms that should be saved before leaving the page. This method assumes that users must submit the form and leave the page in order to save their changes.
The final piece is to hook your form submission up so that it allows the page to unload, because submitting the form is the safe way of leaving.
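The original code listings aren't preserved here; the three steps above can be sketched together as follows. The form id is a placeholder, and the browser wiring is guarded so the core logic is testable outside a browser:

```javascript
// Step 1: change detection. A single flag records whether any
// watched field has been modified since the last save.
var formDirty = false;

function markDirty() {
  formDirty = true;
}

// Returns the warning message to show on unload, or undefined to
// allow the page to close without interruption.
function unloadMessage(dirty) {
  return dirty ? "You have unsaved changes. Are you sure you want to leave?" : undefined;
}

// Steps 2 and 3: browser-only wiring.
if (typeof document !== "undefined") {
  var form = document.getElementById("editor"); // placeholder form id

  // Step 2: flag any edit to the form's fields.
  form.addEventListener("change", markDirty);
  form.addEventListener("input", markDirty);

  // Warn before leaving the page with unsaved data.
  window.onbeforeunload = function () {
    return unloadMessage(formDirty);
  };

  // Step 3: submitting the form is the safe exit, so clear the flag
  // and let the unload proceed silently.
  form.addEventListener("submit", function () {
    formDirty = false;
  });
}
```

Returning undefined from onbeforeunload lets navigation proceed normally; returning a string causes the browser to prompt the user before leaving.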
With those changes, your users will now be alerted before attempting to leave the page with unsaved data, hopefully saving your content from certain destruction.