Someone asked on Slashdot what NAS he should buy. As usual, the clowns had a field day in the comments (even at threshold 5): the usual "You should not need one if you do foo", "Why add a single point of failure?", and other funny comments. I actually have had a NAS at home for more than a year now. Here is what I wrote a year ago:
I needed more storage, and after waiting months for prices to come down (I'm sure they will soon...) I decided to buy what was apparently the Mercedes of small NAS systems, the Infrant ReadyNAS NV. My decision was based on several reviews, the most accurate of which is probably this one on Toms Networking. The problem with reviews is always that reviewers focus on the wrong thing, or on something that doesn't interest you. For example, I don't care at all what the performance of the NAS is, and I don't think many home users (the target audience) should care. In the end, it was the actual manual of the NAS that made the difference. I also found it interesting that Infrant does their own CPU, RAID controller, and board design; others seem to just throw Linux at some ready-made industrial board, or even use software RAID.
So my review is going to be rather short, but I'll give you the interesting details that really make you love or hate a device like this.
First, I want to be able to upgrade the NAS two years down the road. I'm starting with 4 x 400 GB now, which in RAID5 leaves me with about 1200 GB of net storage capacity. In two years I plan to throw in 4 x 750 GB, or whatever is affordable then. I want to upgrade the disks without having to back up the data first. I'm surprised that no review of a home NAS talks about such a feature, or at least highlights it more. This is critical: home users don't have another 1 TB in cold standby to copy stuff to when they upgrade. They also can't afford to just buy a completely new NAS in two years, because these things are still really expensive considering what they do (I paid about 1300 EUR with 2 x 400 GB installed).
The ReadyNAS NV has a smart RAID mode that upgrades your data on the fly to higher RAID levels, depending on how many disks are installed. I started with 2 x 400 GB, automatically in RAID1. Once I plugged in the third 400 GB disk, the NAS migrated the volume to 800 GB RAID5 automatically (it took a few hours). When I inserted the fourth 400 GB disk, I had 1200 GB in RAID5 ready. Now consider upgrading: remove a 400 GB disk and replace it with a 750 GB disk. The smart RAID setting will use only 400 GB of that disk. Replace the remaining three disks with higher-capacity models, and once you have replaced all of them, the whole RAID will expand and utilize the free space on the bigger disks. Perfect.
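The capacity arithmetic behind this expansion scheme can be sketched in a few lines. This is my own illustration, not Infrant's actual algorithm, and `usableGb` is a hypothetical helper: RAID5 can only use the smallest disk's size on every disk, and loses one disk's worth of space to parity.

```java
import java.util.Arrays;

public class RaidCapacity {
    // RAID5 usable capacity: (n - 1) disks' worth of the smallest disk.
    // This is why a single 750 GB disk among 400 GB disks contributes
    // only 400 GB until every disk has been upgraded.
    static int usableGb(int... diskSizesGb) {
        int min = Arrays.stream(diskSizesGb).min().orElse(0);
        return (diskSizesGb.length - 1) * min;
    }

    public static void main(String[] args) {
        System.out.println(usableGb(400, 400, 400, 400)); // 1200
        System.out.println(usableGb(750, 400, 400, 400)); // still 1200
        System.out.println(usableGb(750, 750, 750, 750)); // 2250
    }
}
```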
What else is there to say about the ReadyNAS NV? It's very, very small, barely larger than the 4 disks stacked, it is silent (no matter what the reviews tell you), and I have no problem having it next to my desk. The admin interface works great, and every feature I tested (SMB, AFP, iTunes streaming server, the permission system, etc.) worked without any issues.
If I write a large file, I completely max out my 100 Mbit network at about 8-10 MB/s over SMB. Reading is the same. Smaller files are a little slower, but it's fast enough for everything I'm doing.
Summary: The ReadyNAS NV is really the Mercedes of home NAS systems, at least I have absolutely no complaints about it and I'm perfectly happy with what it does and how it is designed (and that is rare).
Now, one year later, I have a few complaints. The machine still works fine and I would probably buy it again. However, the power supply died with a funny ozone smell about 5 months ago. If you search the Infrant forums, you will find other people with the same problem. So off I went through the usual support channels, trying to get the unit fixed and the power supply replaced. Of course, this was just when Infrant was bought by Netgear (for some 60 million in cash), so support suffered - big surprise. After several phone calls, and ping-pong between Netgear and Infrant support, I finally figured out that the easiest way was to send the unit back to the local reseller in Switzerland I had bought it from. After four weeks it came back working again. So I'm happy with the NAS again. Buy it, if you don't mind some support issues for the time being.
Well, it was a very pleasant experience - I booted the Fedora KDE LiveCD and ran the Fedora installer from the desktop, which took about 20 minutes. Once I had rebooted, I used the package manager to install the /Java Development/ set of packages (which gave me IcedTea and ant). I then downloaded and unzipped JBoss AS 4.2.2.GA and Seam 2.0.0.GA, deployed the booking example, started JBoss AS, and booked myself a suite!
I did notice that the application seemed faster than normal, so I took a look at my favourite, completely subjective performance measure -- how long it takes JBoss AS to start with just the booking example deployed -- and it seemed good at around 20s.
This piqued my interest, so I did a highly unscientific test: I installed the Sun JDK 1.5.0_14 and the Sun JDK 1.6.0_03, and (using Seam and the example compiled with JDK 1.5) took a look at how long the server takes to start.
I found that booting the server with JDK 5 took 32s, with JDK 6 25s, and with IcedTea (JDK 7) 21s -- definitely going in the right direction! I then compiled Seam and the example using IcedTea, and (running JBoss AS on IcedTea) got a startup time of around 19-20s.
Of course, this is no match for a real performance test, but I found it interesting.
To celebrate the new release of JBoss Tools, I'm going to walk through some of the features of JBoss Tools that are interesting to Seam developers.
There are two perspectives that are of interest for people using Seam: the Seam perspective and the Hibernate perspective:
The Seam perspective features some very useful wizards in the New menu:
The first thing you'll want to do is create a Seam Web Project, by following the wizard:
Next, create a Seam Action:
All Seam components are easily accessible from the Seam Component View:
Even better, they're autocompleted whenever you start typing an EL expression:
Even property names are autocompleted (JBoss Tools is even smart enough to understand generic types!):
We can run our application from the Run menu, or from the Servers View. JBoss Tools automatically deploys changes incrementally, a /big/ improvement over the Ant-based solution used in seam-gen:
The most impressive feature of JBoss Tools is the visual page editor, which does a great job of previewing complex Facelets pages with RichFaces controls, standard JSF controls and even Facelets templating:
Of course, autocomplete and hyperlink/F3 navigation to Seam components and Seam component properties also works in the visual editor:
There is a visual editor for web.xml:
And one for components.xml:
Autocomplete and hyperlinking/F3 work here too:
If we use Seam Generate Entities, we can reverse engineer an application from a database schema, or from existing entities:
And, switching to the Hibernate Perspective, we can browse the entities via a treeview:
Or via a full visualization of the mapping:
Update 16.11: The update site is now in sync with CR1 too. Use http://download.jboss.org/jbosstools/updates/development in Eclipse.
This release brings JBoss Tools very close to a final release, but to get there we are going to need help from the community to test out the release. If you haven't already done so, please try out JBoss Tools and if you find any issues bring them up in the forum or open issues in JIRA.
We have also started making some small movies to show off the functionality in JBoss Tools and Red Hat Developer Studio.
The following shows the steps to create a JBoss Seam project with JBoss Tools and how to run and edit the project.
While there are many things I like about the Java language, the lack of unsigned types has always bothered me.
According to Gosling, they were too complicated:
Quiz any C developer about unsigned, and pretty soon you discover that almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is. Things like that made C complex.
Ironically, this kind of hand-holding tends to introduce other complexities that are often more difficult to deal with than the original problem. In this particular case, leaving out unsigned types doesn't stop the need to work with unsigned data. Instead, it forces the developer to work around the language limitation in various unusual ways.
The first major problem is that byte is signed in Java. Out of all of the code I have ever written, I can only think of a select few situations where I needed a signed byte value. In almost all cases I wanted the unsigned version.
Let's look at a very simple example: initializing a byte to 0xFF (or 255). The following will fail (note: this works in C#, since C# made byte unsigned):
byte b = 0xFF;
Java will not narrow this for us because the value is outside the range of the signed byte type (>127). We can, however, work around it with a cast:
byte b = (byte) 0xFF;
If we are clever and know two's complement, we can use the negative equivalent of our simple unsigned value:
byte b = -1;
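A minimal runnable sketch (my own illustration) showing that the cast and the negative literal produce the same bit pattern, and that masking recovers the unsigned value:

```java
public class SignedByteDemo {
    public static void main(String[] args) {
        byte a = (byte) 0xFF; // explicit cast forces the narrowing
        byte b = -1;          // the two's complement equivalent
        System.out.println(a == b);   // true: identical bit pattern
        System.out.println(a);        // -1: Java prints the signed view
        System.out.println(a & 0xFF); // 255: masking recovers the unsigned value
    }
}
```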
This is just the tip of the iceberg, though. A common technique used in search and compression algorithms is to precompute a table based on the occurrence of a particular byte value. Since a byte can represent 256 values, this is typically done using an array with the byte value as an index, which is very efficient. OK, so you might think you can do the following:
byte b = (byte) 0xFF;
int[] table = new int[256];
table[b] = 1; // OOPS!
While this code will legally compile, it will result in a runtime exception. What happens is that the array index operator requires an integer. Since a byte is specified instead, Java converts the byte to an integer, and this results in sign extension. Again, 0xFF means -1 for a signed byte, so it gets converted to an integer with a value of -1. This, of course, is an invalid array index.
To solve the problem, we must use the bitwise-and operator to force the conversion to occur in the correct (yet unintuitive) way like so:
table[b & 0xFF] = 1;
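Put together as a runnable sketch (with the 256-entry table described above):

```java
public class ByteTableDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;
        int[] table = new int[256];
        // table[b] = 1; would throw ArrayIndexOutOfBoundsException here:
        // sign extension turns (byte) 0xFF into the int -1.
        table[b & 0xFF] = 1; // the mask forces the index into 0..255
        System.out.println(table[255]); // 1
    }
}
```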
This technique gets ugly quickly. Take a look at composing an int from 4 bytes (ugh!):
byte b1 = 1;
byte b2 = (byte) 0x82;
byte b3 = (byte) 0x83;
byte b4 = (byte) 0x84;
int i = (b1 << 24) | ((b2 & 0xFF) << 16) | ((b3 & 0xFF) << 8) | (b4 & 0xFF);
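Wrapped into a small helper (a `pack` method of my own, purely for illustration), the masking makes the big-endian composition come out right:

```java
public class PackBytes {
    // Pack four bytes big-endian into an int. Every byte except the
    // highest must be masked with 0xFF to defeat sign extension
    // before shifting, or the set bits of a negative byte would
    // clobber the higher-order bytes.
    static int pack(byte b1, byte b2, byte b3, byte b4) {
        return (b1 << 24) | ((b2 & 0xFF) << 16) | ((b3 & 0xFF) << 8) | (b4 & 0xFF);
    }

    public static void main(String[] args) {
        int i = pack((byte) 1, (byte) 0x82, (byte) 0x83, (byte) 0x84);
        System.out.printf("%x%n", i); // 1828384
    }
}
```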
These issues have in turn led to odd API workarounds. For example, look at InputStream.read(), which according to its docs is supposed to return a byte, but instead returns an int. Why? So it can do the & 0xFF for you.
We also have DataOutput.writeShort() and DataOutput.writeByte(), which take ints instead of their respective types. Why? So that you can write unsigned values on the wire. On the reading side we end up with four methods: DataInput.readShort(), DataInput.readUnsignedShort(), DataInput.readByte(), and DataInput.readUnsignedByte(). The unsigned versions return converted ints instead of the types their names describe.
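A round trip through these APIs makes the asymmetry visible; `roundTrip` below is a hypothetical helper I added for the demonstration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class UnsignedIoDemo {
    // Write an "unsigned byte" and an "unsigned short", then read them back.
    static int[] roundTrip(int unsignedByte, int unsignedShort) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(unsignedByte);   // takes an int, so 200 fits
        out.writeShort(unsignedShort); // likewise for 40000

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        // readByte() would give -56 and readShort() -25536; the
        // unsigned variants hand back the values as ints.
        return new int[] { in.readUnsignedByte(), in.readUnsignedShort() };
    }

    public static void main(String[] args) throws IOException {
        int[] r = roundTrip(200, 40000);
        System.out.println(r[0] + " " + r[1]); // 200 40000
    }
}
```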
To add to the confusion, we also have two right-shift operators in this signed-only mess: the unsigned right-shift operator >>>, which treats the value as if it were unsigned, and the normal right shift >>, which preserves the sign (essentially acting like a divide by two). If we want to get the most significant nibble of an integer, we need the unsigned version.
int i = 0xF0000000;
System.out.printf("%x\n", i >> 28);  // Prints ffffffff!
System.out.printf("%x\n", i >>> 28); // Prints f, as desired
So I ask all of you: was all of this hassle worth leaving out the simple and well-understood unsigned keyword? I think not, and I hope anyone who considers doing this in a language they are designing learns from it. At least C# has.