Pixel 5 Gap Gate: It's not only the gap!

In April this year I bought a Google Pixel 5 to replace my old Samsung Galaxy S7. I was looking for a phone that is small but still has all the new features and a good camera. The Pixel 5 was the ideal candidate, especially as its NFC sensor works much better with Google Pay and with my Freestyle Libre blood sugar sensor. In addition, the camera produces fairly good pictures. The aluminium body makes the back much easier to hold without a cover (the Samsung Galaxy S7 is impossible to hold and work with if you don't have a cover on the back).

The new phone - a quick review

When I got the Pixel 5 in April, everything was looking great: I was really happy with the phone. The software was much better than Samsung's bloatware (like Samsung's horrible Bixby). The phone was very fast and a pleasure to work with. After unpacking it, I also checked whether the phone was affected by the Pixel Gap Gate (#PixelGapGate) - which wasn't the case! The well-known "Pixel Gap Gate" is also the reason for this blog post: after a few weeks of heavy use, mostly in my home office, and inductive charging (which heats up the phone more than charging with a USB cable), I discovered that my phone was indeed affected by #PixelGapGate; it just took longer. On the upper side of the phone, the gap between body and display started to open a bit. Over time it grew larger. In June it was approximately large enough to fit a fingernail in between:

At first, I did not notice any other issues, so I was happy to accept Google's statement about #PixelGapGate.

Proximity sensor issues started

At the beginning of July, when Corona restrictions were loosened, I spent more time outside. I made a large number of phone calls without a Bluetooth headset: back to life! I noticed quite quickly that the phone screen went dark as soon as a call started: you enter the phone number, press the "Call" button, and half a second later the screen goes dark. At first I thought this might be an issue with the Google Phone app, but it did not change after several updates from the Play Store. So I started to investigate, and it took a while to make the connection:

  • The phone screen only went dark when no headset was used. It did not matter whether I dialed or somebody called me - it happened instantly!
  • In the car and with a Bluetooth headset, the screen never went dark.
  • With the loudspeaker turned on, the screen also did not go dark.

These observations pointed me in the right direction: why does a phone screen normally go dark? YES! Because you move the smartphone to your ear! It is just a protection, so you don't accidentally hit on-screen buttons while you listen to the call. If you move the phone away from your ear, the screen switches on again.

You can easily test this: start a phone call on any Android device. The screen should be on. Then move your hand towards the top of the phone; when it gets closer than about 5 centimeters, the screen switches off. If you move your hand away, the screen comes on again. Responsible for this is the proximity sensor, which is standard on all smartphones (it was also installed on most old Nokia or Siemens phones of the pre-smartphone era).

To investigate further, I installed a proximity sensor test app. I also did this on my old phone, and then the issue was obvious: after starting the app on the Pixel 5, it showed a green frame and the text "NEAR". On my old phone the proximity sensor behaved as it should: the frame was colored red and the text "FAR" appeared. As soon as I moved my hand over the sensor of the Galaxy S7, the screen went green and showed "NEAR". In contrast, on the Pixel 5 the sensor did not react to my hand at all.

Digging a bit more, I figured out that the sensor works if you take your other hand and press against the screen in the top-left or top-right corner (not in the center, because the sensor is located there). If you then move a hand in front of the sensor, the test app confirms that the proximity sensor reacts in the same way as on my old phone. I searched Google and found a lot of complaints about this:

One person also posted a video that exactly reproduces what I saw (source: https://drive.google.com/file/d/1S-nvsvxE_OyiqyS1DURfDLt9vTGyLCkT/view):

This quite clearly illustrates my issue. I was happy that I was not alone!

What does it have to do with #PixelGapGate?

It is just a matter of bringing the two issues together: there is a gap between screen and body (#PixelGapGate), and the proximity sensor issue is solved by pressing on the screen! To explain how these fit together, you have to understand how the proximity sensor is installed in the phone: in older phones, the proximity sensor sits at the top of the phone behind a separate hole next to the earpiece. In most cases it is a capacitive or infrared sensor (you can test this by putting objects other than fingers in front of the sensor: capacitive sensors mainly detect conductive objects like fingers or ears, while infrared sensors detect almost everything). The problem with newer phones is that the whole front of the phone is display, so there is no space for a small hole. In fact, the Pixel 5 proximity sensor sits at the top center of the display, mounted behind the display glass. When it is enabled, you can see some white flickering at the top center of the screen:

This flickering already annoyed a lot of people, because it is visible and looks like a broken pixel. Some applications like Google Home or the N26 banking app query the proximity sensor all the time, so it flickers constantly. On my phone I disabled all those apps, so the proximity sensor is only active during phone calls and while the test app is running.

Because the proximity sensor is mounted behind the display glass, the glass also has an effect on it. It looks like on some Pixel 5 phones there is a gap between the display glass and the sensor, so pressing on the glass brings it closer. If there is a small gap, the sensor "thinks" there is something in front, because the sensor is calibrated to look through the display only if the glass is close enough. Somebody also told me that the proximity sensor needs to be shielded from light coming in from the side, which is obviously no longer the case if there is a gap. And now you see the connection to #PixelGapGate: the Gap Gate causes an additional gap. If the glass moves away from the body, a gap also opens between sensor and display glass, and the sensor therefore detects an object in front of the display.

I contacted Google Support, and they told me that they don't know about such a problem and that the gap is no issue at all - the phone is still water resistant. But they offered to replace my phone. The problem was that I did not buy the phone from Google's web store, but from a local retailer. In that case I would need to send in the phone and live without one in the meantime. As you all know, I am diabetic and regularly check my blood sugar using my phone with a Freestyle Libre Bluetooth Low Energy (BLE) sensor. Switching phones twice and copying data around was also no option for me. So I looked for other possible solutions.

(Temporarily) Fixing the Issue

So I applied a fix that was also suggested by members of the above forums about #PixelGapGate: close the gap and check whether the proximity sensor works again! I took a hairdryer and a huge stack of analog books - the Brockhaus Encyclopedia! First I heated the phone with the hairdryer (this softens the glue) and then placed it under those Brockhaus volumes. I put approximately 5 kg of books on top of the display and went to sleep.

The next day, I rescued the phone and checked its state: the gap was gone! Hurraaa! A quick test with the proximity sensor test app also showed: everything was back to normal.

I used the phone for a few days and had no problems with the sensor anymore. All phone calls behaved correctly, and the screen only went dark when I put the phone close to my ear.

Time goes by...

Unfortunately, after a week or so the gap started to open again, so the Brockhaus fix was only temporary. I repeated the fix, just to verify that it is reproducible. As expected: #PixelGapGate and the proximity sensor issue are directly related. Searching on Reddit also turned up the following: "The Gap (gate). The Lie of Google."

I contacted Google Support again; they still offered to replace my phone, and at the moment they are checking whether they can do the replacement by sending the new phone first (like Amazon normally does).

To conclude: sorry Google, the Gap Gate is a real issue! It is a lie that it does not affect users. Hundreds of users have a gap, and hundreds of users have a screen turning black when they make a phone call. It is not good customer support to just say: "Hey, all is fine, don't worry!"

You should offer all affected people an easy replacement of their phone without any bureaucracy. But most importantly: before doing that, fix your design! According to its serial number, my phone was built in March, five months after the first people noticed the gaps! I would not have expected a phone manufactured five months later to still have the issue.

So please, Google: do something! The Pixel 5 is a great phone, but the Gap Gate and the proximity sensor issue are a pain! It's easy to fix by replacing the glue - just teach your manufacturer to fix their production!

Update (2021-08-14; may apply to German customers only): Unfortunately, Google is not able to replace a Pixel 5 by sending the new one first and waiting for the return (like Amazon) if it was not bought through their web store. So please see this as a warning: never buy a Google Pixel phone or other Google hardware through a local retailer or from a third party on a web store like Amazon! If you buy it in the Google Store, they will offer you a direct exchange without having to send in the old phone first, so you can get the replacement phone first and copy the data, e.g. through a USB cable. This would have been important for me as a diabetic, because I use the phone to read my blood sugar sensor. Now I have to copy the data twice (broken phone => very old Samsung phone or my laptop => new phone).


Use Lucene’s MMapDirectory on 64bit platforms, please!

Don’t be afraid – Some clarification to common misunderstandings

Since version 3.1, Apache Lucene and Solr use MMapDirectory by default on 64bit Windows and Solaris systems; since version 3.3 also on 64bit Linux systems. This change led to some confusion among Lucene and Solr users, because their systems suddenly behaved differently than in previous versions. On the Lucene and Solr mailing lists, many posts arrived from users asking why their Java installation was suddenly consuming three times their physical memory, or from system administrators complaining about heavy resource usage. Consultants also started telling people not to use MMapDirectory and to change their solrconfig.xml to the slow SimpleFSDirectory or NIOFSDirectory (which is much slower on Windows, caused by JVM bug #6265734) instead. From the point of view of the Lucene committers, who carefully decided that MMapDirectory is the best choice for those platforms, this is rather annoying, because they know that Lucene/Solr can work with much better performance than before. Common misinformation about the background of this change causes suboptimal installations of this great search engine everywhere.

In this blog post, I will try to explain the basic operating system facts about virtual memory handling in the kernel and how they can be used to largely improve the performance of Lucene ("VIRTUAL MEMORY for DUMMIES"). It will also clarify why the blog and mailing list posts by various people are wrong and contradict the purpose of MMapDirectory. In the second part I will show you some configuration details and settings you should take care of to prevent errors like "mmap failed" and suboptimal performance caused by oversized Java heap allocation.

Virtual Memory[1]

Let’s start with your operating system’s kernel: the naive approach to do I/O in software is the way it has been done since the 1970s – the pattern is simple: whenever you have to work with data on disk, you execute a syscall to your operating system kernel, passing a pointer to some buffer (e.g. a byte[] array in Java), and transfer some bytes from/to disk. After that you parse the buffer contents and do your program logic. If you don’t want to do too many syscalls (because they can cost a lot of processing power), you generally use large buffers in your software, so synchronizing the data in the buffer with your disk needs to be done less often. This is one reason why some people suggest loading the whole Lucene index into Java heap memory (e.g., by using RAMDirectory).

But all modern operating systems like Linux, Windows (NT+), MacOS X, or Solaris provide a much better alternative to this 1970s style of code through their sophisticated file system caches and memory management features. The feature called “virtual memory” is a good way to handle very large and space-intensive data structures like a Lucene index. Virtual memory is an integral part of a computer architecture; implementations require hardware support, typically in the form of a memory management unit (MMU) built into the CPU. The way it works is very simple: every process gets its own virtual address space into which all libraries, heap, and stack space are mapped. In most cases this address space starts at offset zero, which simplifies loading the program code because no relocation of address pointers needs to be done. Every process sees a large, unfragmented, linear address space it can work on. It is called “virtual memory” because this address space has nothing to do with physical memory; it just looks that way to the process. Software can access this large address space as if it were real memory, without knowing that other processes are also consuming memory and have their own virtual address spaces. The underlying operating system works together with the MMU in the CPU to map those virtual addresses to real memory once they are accessed for the first time. This is done using so-called page tables, which are backed by TLBs (translation lookaside buffers, which cache frequently accessed pages) located in the MMU hardware. This way, the operating system is able to distribute the memory requirements of all running processes across the actually available memory, completely transparently to the running programs.

Schematic drawing of virtual memory
(image from Wikipedia [1], http://en.wikipedia.org/wiki/File:Virtual_memory.svg, licensed by CC BY-SA 3.0)

By using this virtualization, there is one more thing the operating system can do: if there is not enough physical memory, it can decide to “swap out” pages no longer used by the processes, freeing physical memory for other processes or for caching more important file system operations. Once a process tries to access a virtual address that was paged out, it is reloaded into main memory and made available to the process. The process does not have to do anything; it is completely transparent. This is a good thing for applications, because they don’t need to know anything about the amount of memory available; but it also leads to problems for very memory-intensive applications like Lucene.

Lucene & Virtual Memory

Let’s take the example of loading the whole index or large parts of it into “memory” (we already know it is only virtual memory). If we allocate a RAMDirectory and load all index files into it, we are working against the operating system: the operating system tries to optimize disk accesses, so it already caches all disk I/O in physical memory. We copy all these cache contents into our own virtual address space, consuming horrible amounts of physical memory (and we must wait for the copy operation to take place!). As physical memory is limited, the operating system may of course decide to swap out our large RAMDirectory – and where does it land? On disk again (in the OS swap file)! In fact, we are fighting against our O/S kernel, which pages out all the stuff we loaded from disk [2]. So RAMDirectory is not a good idea for optimizing index loading times! Additionally, RAMDirectory has more problems related to garbage collection and concurrency. Because the data resides in swap space, Java’s garbage collector has a hard job freeing the memory in its own heap management. This leads to high disk I/O, slow index access times, and minute-long latency in your searching code caused by the garbage collector going crazy.

On the other hand, if we don’t use RAMDirectory to buffer our index and use NIOFSDirectory or SimpleFSDirectory, we have to pay another price: Our code has to do a lot of syscalls to the O/S kernel to copy blocks of data between the disk or filesystem cache and our buffers residing in Java heap. This needs to be done on every search request, over and over again.

Memory Mapping Files

The solution to the above issues is MMapDirectory, which uses virtual memory and a kernel feature called “mmap” [3] to access the disk files.

In our previous approaches, we were relying on using a syscall to copy the data between the file system cache and our local Java heap. How about directly accessing the file system cache? This is what mmap does!

Basically, mmap does the same thing as handling the Lucene index as a swap file. The mmap() syscall tells the O/S kernel to virtually map our whole index files into the previously described virtual address space and make them look like RAM available to our Lucene process. We can then access our index file on disk just as if it were a large byte[] array (in Java this is encapsulated by the ByteBuffer interface to make it safe for use by Java code). If we access this virtual address space from the Lucene code, we don’t need to do any syscalls; the processor’s MMU and TLB handle all the mapping for us. If the data is only on disk, the MMU raises an interrupt and the O/S kernel loads the data into the file system cache. If it is already in the cache, the MMU/TLB maps it directly to the physical memory of the file system cache. It is then just a native memory access, nothing more! We don’t have to take care of paging buffers in/out; all of that is managed by the O/S kernel. Furthermore, we have no concurrency issues; the only overhead over a standard byte[] array is some wrapping caused by Java’s ByteBuffer interface (it is still slower than a real byte[] array, but it is the only way to use mmap from Java and is much faster than all other directory implementations shipped with Lucene). We also waste no physical memory, as we operate directly on the O/S cache, avoiding all the Java GC issues described before.
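To make the mechanism tangible, here is a minimal, self-contained sketch (plain JDK, not Lucene itself) of what MMapDirectory does under the hood: the file is mapped into the virtual address space with FileChannel.map(), and afterwards it is read like a byte array, with no read() syscall per access. The file name and contents are made up for the demo.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static void main(String[] args) throws IOException {
        // Create a small demo file standing in for an index file:
        Path file = Files.createTempFile("mmap-demo", ".bin");
        Files.write(file, "hello mmap".getBytes(StandardCharsets.UTF_8));

        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            // Map the whole file into our virtual address space; the kernel
            // pages it in from the file system cache on first access.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes); // plain memory access, no syscall per byte
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
        Files.delete(file);
    }
}
```

Lucene wraps exactly this kind of MappedByteBuffer behind its Directory abstraction, so index readers see the file system cache directly.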

What does this all mean to our Lucene/Solr application?
  • We should no longer work against the operating system, so allocate as little heap space as possible (-Xmx Java option). Remember, our index accesses go directly to the O/S cache! This is also very friendly to the Java garbage collector.
  • Free as much physical memory as possible to be available to the O/S kernel as file system cache. Remember, our Lucene code works directly on it, reducing the amount of paging/swapping between disk and memory. Allocating too much heap to our Lucene application hurts performance! Lucene does not need it with MMapDirectory.
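For Solr users who were told to switch away from MMapDirectory, undoing that "fix" in solrconfig.xml looks roughly like this (a sketch; factory class names and defaults may differ between Solr versions, so check the reference guide for yours):

```xml
<!-- solrconfig.xml: explicitly select the mmap-based implementation.
     On 64bit platforms this is already the default, so this entry is
     mainly useful to revert an earlier SimpleFSDirectory/NIOFSDirectory
     override. -->
<directoryFactory name="DirectoryFactory"
                  class="solr.MMapDirectoryFactory"/>
```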

Why does this only work as expected on operating systems and Java virtual machines with 64bit?

One limitation of 32bit platforms is the size of pointers: they can refer to any address between 0 and 2^32 - 1, which is 4 gigabytes. Most operating systems limit that address space to 3 gigabytes, because the remaining address space is reserved for use by device hardware and similar things. This means the overall linear address space provided to any process is limited to 3 gigabytes, so you cannot map any file larger than that into this “small” address space to be available as a big byte[] array. And once you have mapped one large file, no virtual space (addresses, like “house numbers”) is available anymore. As physical memory sizes in current systems have already gone beyond that size, there is no address space left to use for mapping files without wasting resources (in our case “address space”, not physical memory!).

On 64bit platforms this is different: 2^64 - 1 is a very large number, in excess of 18 quintillion bytes, so there is no real limit on address space. Unfortunately, most hardware (the MMU, the CPU’s bus system) and operating systems limit this address space to 47 bits for user-mode applications (Windows: 43 bits) [4]. But that still leaves plenty of address space to map terabytes of data.

Common misunderstandings

If you have read carefully what I have told you about virtual memory, you can easily verify that the following is true:
  • MMapDirectory does not consume additional memory, and the size of the mapped index files is not limited by the physical memory available on your server. By mmap()ing files we only reserve address space, not memory! Remember, address space on 64bit platforms is for free!
  • MMapDirectory will not load the whole index into physical memory. Why should it? We just ask the operating system to map the file into address space for easy access; by no means are we requesting more. Java and the O/S optionally provide the option of trying to load the whole file into RAM (if enough is available), but Lucene does not use it (we may add this possibility in a later version).
  • MMapDirectory does not overload the server when “top” reports horrible amounts of memory. “top” (on Linux) has three columns related to memory: “VIRT”, “RES”, and “SHR”. The first one (VIRT, virtual) reports allocated virtual address space (and that one is for free on 64bit platforms!). This number can be several times your index size or physical memory when merges are running in IndexWriter. If you have only one IndexReader open, it should be approximately equal to the allocated heap space (-Xmx) plus the index size. It does not show the physical memory used by the process. The second column (RES, resident) shows how much physical memory the process has allocated for operating and should be about the size of your Java heap. The last column (SHR, shared) shows how much of the allocated virtual address space is shared with other processes. If you have several Java applications using MMapDirectory to access the same index, you will see this number go up. Generally, you will also see the space needed by shared system libraries, JAR files, and the process executable itself (which are also mmapped).

How to configure my operating system and Java VM to make optimal use of MMapDirectory?

First of all, the default settings in Linux distributions and on Solaris/Windows are perfectly fine. But there are some paranoid system administrators around who want to control everything (with a lack of understanding). They limit the maximum amount of virtual address space that applications can allocate. So please check that “ulimit -v” and “ulimit -m” both report “unlimited”; otherwise it may happen that MMapDirectory reports “mmap failed” while opening your index. If this error still happens on systems with lots of very large indexes, each of them with many segments, you may need to tune your kernel parameters in /etc/sysctl.conf: the default value of vm.max_map_count is 65530, and you may need to raise it. I think there are similar settings available for Windows and Solaris systems, but it is up to the reader to find out how to use them.
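On Linux, the kernel tuning just mentioned is a one-line config fragment (the value 262144 below is only an example; pick one that matches your number of indexes and segments):

```
# /etc/sysctl.conf -- raise the per-process limit on memory mappings
# (default is 65530):
vm.max_map_count=262144

# apply without reboot:
#   sysctl -p
```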

For configuring your Java VM, you should rethink your memory requirements: give only the really needed amount of heap space and leave as much as possible to the O/S. As a rule of thumb: don’t use more than ¼ of your physical memory as heap space for Java running Lucene/Solr; keep the remaining memory free for the operating system cache. If you have more applications running on your server, adjust accordingly. As usual, the more physical memory the better, but you don’t need as much physical memory as your index size. The kernel does a good job of paging in frequently used parts of your index.

A good way to check that you have configured your system optimally is to look at both "top" (and interpret it correctly, see above) and the similar command "iotop" (which can be installed, e.g., on Ubuntu Linux with "apt-get install iotop"). If your system does lots of swapping in/out for the Lucene process, reduce the heap size; you have possibly used too much. If you see lots of disk I/O, buy more RUM (Simon Willnauer) so that mmapped files don't need to be paged in/out all the time, and finally: buy SSDs.

Happy mmapping!



The Policeman’s Horror: Default Locales, Default Charsets, and Default Timezones

Time for a tool to prevent any effects coming from them!

Did you ever try to run software downloaded from the net on a computer with a Turkish locale? I think most of you never did. And if you ask Turkish IT specialists, they will tell you: “It is better to configure your computer using any other locale, but not tr_TR”. No clue what I am talking about? Maybe this article gives you a hint: “A Cellphone’s Missing Dot Kills Two People, Puts Three More in Jail”.

What you see in lots of software is so-called case-insensitive matching of keywords like parameter names or function names. In most cases this is implemented by lowercasing or uppercasing the input text and comparing it with a list of already lowercased/uppercased items. This works fine almost anywhere in the world – except Turkey! Because most programmers don’t care about running their software in Turkey, they do not test it under the Turkish locale.

But what happens with the case-insensitive matching if running in Turkey? Let’s take an example:

A user enters “BILLY” in the search field of your application. The application then uses the approach presented before, lowercases “BILLY”, and compares it to an internal table (e.g. our search index, parameter table, function table, ...). So we search this table for “billy”. So far so good; this works perfectly in the USA, Germany, Kenya – almost everywhere except Turkey. What happens in the Turkish locale when we lowercase “BILLY”? After reading the above article, you might expect it: the “BILLY”.toLowerCase() statement in Java returns “bılly” (note the dotless i: 'ı' U+0131). You can try this out on your local machine without reconfiguring it to use the Turkish locale; just try the following Java code:
assertEquals("bılly", "BILLY".toLowerCase(new Locale("tr", "TR")));
The same happens vice versa: if you uppercase an ‘i’, you get an I with a dot (‘İ’ U+0130). This is really serious: millions of lines of code out there in Java and other languages don’t take care that String.toLowerCase() and String.toUpperCase() can optionally take an explicit Locale (more about that later). Some examples from projects I am involved in:

  • Try to run an XSLT stylesheet using Apache XALAN-XSLTC (or Java 5’s internal XSLT interpreter) in the Turkish locale. It will fail with “unknown instruction”, because XALAN-XSLTC compiles the XSLT to Java Bytecode and somehow lowercases a virtual machine opcode before compiling it with BCEL (see XALANJ-2420, BCEL bug #38787).
  • The HTML SAX parser NekoHTML uses locale-less uppercasing/lowercasing to normalize charset names and element names. I opened a bug report (issue #3544334).
  • If you use PHP as your favourite scripting language, which is not case sensitive for class names and other language constructs, it will throw a compile error once you try to call a function with an “i” in it (see PHP bug #18556). Unfortunately it is unlikely that this serious bug is fixed in PHP 5.3 or 5.4!
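The whole problem, and the locale-invariant behavior of Locale.ROOT discussed below, can be reproduced on any machine without touching the system locale. A small self-contained demo (class name made up):

```java
import java.util.Locale;

public class TurkishDemo {
    public static void main(String[] args) {
        // Under tr_TR, upper-case 'I' lower-cases to dotless 'ı' (U+0131):
        String turkish = "BILLY".toLowerCase(new Locale("tr", "TR"));
        // The root locale is language-invariant and gives the expected result:
        String invariant = "BILLY".toLowerCase(Locale.ROOT);
        System.out.println(turkish);
        System.out.println(invariant);
    }
}
```

Running this prints the broken Turkish result first and the invariant result second, whatever your machine’s default locale is.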

The question is now: How to solve this?

The most correct way is not to lowercase at all! For case-insensitive comparison, Unicode defines “case folding”, a so-called canonical form of text in which all upper/lower case distinctions of any character are normalized away. Unfortunately, case-folded text may no longer be readable (this depends on the implementation, but in most cases it is). It just ensures that case-folded texts can be compared to each other in a case-insensitive way. Unfortunately, Java does not offer a function to get this string, but ICU4J does (see UCharacter#foldCase). But Java offers something much better: String.equalsIgnoreCase(String), which handles case folding internally! In lots of cases, however, you cannot use this fantastic method, because you want to look such strings up in a HashMap or another dictionary. Without modifying HashMap to use equalsIgnoreCase, this would never work. So we are back at lowercasing! As mentioned before, you can pass a locale to String.toLowerCase(), so the naive approach would be to tell Java that we are in the US or using the English language: String.toLowerCase(Locale.US) or String.toLowerCase(Locale.ENGLISH). This produces identical results, but is still not consistent. What happens if the US government decides to lowercase/uppercase like in Turkey? OK, don’t use Locale.US (it is also too US-centric). Locale.ENGLISH is fine and very generic, but languages also change over the years (who knows?), and we want it to be language-invariant! If you are using Java 6, there is a much better constant: Locale.ROOT. You should use this constant for our lowercase example: String.toLowerCase(Locale.ROOT).
You should start now and do a global search/replace on all your Java projects (if you do not rely on language-specific presentation of text)! REALLY!
String.toLowerCase is not the only example of “automatic default locale usage” in the Java API. There are also things like transforming dates or numbers to strings. If you use the Formatter class and run it in another country, String.format(“%f”, 15.5f) may not always use a period (‘.’) as the decimal separator; most Germans will know this. Passing a specific locale here helps in most cases. If you are writing a GUI in the English language, pass Locale.ENGLISH everywhere; otherwise the text output of numbers or dates may not match the language of your GUI! If you want Formatter to behave in an invariant way, use Locale.ROOT, too (then it will for sure format numbers with a period and no thousands separator, just like Float.toString(float) does).
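The Formatter behavior is just as easy to demonstrate as the Turkish-i problem; here both locales are passed explicitly, so the demo behaves the same on every machine (class name made up):

```java
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        // German locale uses a comma as decimal separator:
        System.out.println(String.format(Locale.GERMANY, "%.1f", 15.5f));
        // Locale.ROOT is invariant and always uses a period:
        System.out.println(String.format(Locale.ROOT, "%.1f", 15.5f));
    }
}
```

If the first argument were omitted, the output would silently depend on the machine’s default locale, which is exactly the trap described above.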

A second problem affecting lots of software is two other system-wide configurable default settings: the default charset/encoding and the timezone. If you open a text file with FileReader or convert an InputStream to a Reader with InputStreamReader, Java automatically assumes that the input is in the platform default encoding. This may be fine if you want the text to be parsed using the defaults of the operating system – but if you ship a text file together with your software package (maybe as a resource in your JAR file) and then accidentally read it using the platform’s default charset... it’ll break your app! So my second recommendation:
Always pass a character set to any method converting bytes to strings (like InputStream <=> Reader, String.getBytes(),...). If you wrote the text file and ship it together with your app, only you know its encoding!
For timezones, similar examples can be found.
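Following that recommendation looks like this in practice: a sketch that writes a file with a known encoding and reads it back with the charset named explicitly, so the platform default never enters the picture (file name and contents are made up; the umlaut is written as a Unicode escape to keep the source file encoding out of the demo):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CharsetDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("charset-demo", ".txt");
        // We wrote the file, so we know its encoding: UTF-8.
        Files.write(file, "B\u00FCrgermeister".getBytes(StandardCharsets.UTF_8));

        // Explicit charset on the byte->char boundary; never the bare
        // InputStreamReader(InputStream) constructor:
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                Files.newInputStream(file), StandardCharsets.UTF_8))) {
            System.out.println(r.readLine());
        }
        Files.delete(file);
    }
}
```

With the one-argument InputStreamReader constructor, the same code would garble the umlaut on any machine whose default charset is not UTF-8.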

How this affects Apache Lucene!

Apache Lucene is a full-text search engine and deals with text from different languages all the time; Apache Solr is an enterprise search server on top of Lucene and deals with input documents in lots of different charsets and languages. It is therefore essential for a search library like Lucene to be as independent from local machine settings as possible. A library must make it explicit what input it wants to have. So we require charsets and locales in all public and private APIs (or we only take e.g. java.io.Reader instead of InputStream if we expect text coming in), so the user must take care.

Robert Muir and I reviewed the source code of Apache Lucene and Solr for the coming version 4.0 (an alpha version is already available on Lucene’s homepage, documentation is here). We have done this quite often, but whenever a new piece of code is committed to the source tree, undefined locales, charsets, or similar things may appear again. In most cases it is not the fault of the committer; this happens because the IDE’s auto-complete automatically lists possible methods and parameters to the developer, and you often select the easiest variant (like String.toLowerCase()).

Using default locales, charsets and timezones is in my opinion a big design issue in programming languages like Java. Locale-sensitive methods should take a locale; if you convert a byte[] stream to a char[] stream, a charset must be given. Automatically falling back to defaults is a no-go in a server environment.
If a developer is interested in using the default locale of the user’s computer, he can always give the locale or charset explicitly. In our example this would be String.toLowerCase(Locale.getDefault()). This is more verbose, but it is obvious what the developer intends to do.
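The classic demonstration of why the implicit default is dangerous is the Turkish locale, where uppercase 'I' lowercases to the dotless 'ı' (U+0131). A small sketch (class name is mine):

```java
import java.util.Locale;

public class TurkishLocaleDemo {
  public static void main(String[] args) {
    // In a Turkish locale, 'I' lowercases to the dotless 'ı' (U+0131):
    System.out.println("TITLE".toLowerCase(new Locale("tr", "TR"))); // "tıtle"
    // With Locale.ROOT the result is invariant on every machine:
    System.out.println("TITLE".toLowerCase(Locale.ROOT)); // "title"
    // A bare "TITLE".toLowerCase() returns one or the other,
    // depending entirely on the machine's default locale!
  }
}
```

A term lowercased like this during indexing will silently never match the same term lowercased on a machine with a different default locale.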

My proposal is to ban all those default charset and locale methods / classes in the Java API by deprecating them as soon as possible, so users stop using them implicitly!

Robert’s and my intention is to automatically fail the nightly builds (or compilation on the developer’s machine) when somebody uses one of the above methods in Lucene’s or Solr’s source code. We looked at different solutions like PMD or FindBugs, but both tools are too sloppy to handle this in a consistent way (PMD does not have any “default charset” method detection and FindBugs has only a very short list of method signatures). In addition, both PMD and FindBugs are very slow and often fail to correctly detect all problems. For Lucene builds we only need a tool that looks into the bytecode of all generated Java classes of Apache Lucene and Solr, and fails the build if any signature violating our requirements is found.

A new Tool for the Policeman

I started to hack a tool as a custom ANT task using ASM 4.0 (Lightweight Java Bytecode Manipulation Framework). The idea was to provide a list of method signatures, field names and plain class names that should fail the build once bytecode accesses them in any way. A first version of this task was published in issue LUCENE-4199; later improvements added support for fields (LUCENE-4202) and a sophisticated signature expansion to also catch calls to subclasses of the given signatures (LUCENE-4206).
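The real task is built on ASM, but the underlying idea can be illustrated without any library. The following toy sketch (all names are mine, this is not the actual forbidden-apis code) parses the constant pool of a compiled class and lists every referenced method as a "class#name descriptor" string -- exactly the kind of data you would match against a signatures file:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Toy illustration only -- the real ANT task uses ASM. Parses a class file's
// constant pool and reports all referenced methods.
public class ConstantPoolScanner {
  public static List<String> methodRefs(InputStream classFile) throws IOException {
    DataInputStream in = new DataInputStream(classFile);
    if (in.readInt() != 0xCAFEBABE) throw new IOException("not a class file");
    in.readUnsignedShort(); in.readUnsignedShort(); // minor, major version
    int count = in.readUnsignedShort();
    String[] utf8 = new String[count];
    int[][] entry = new int[count][]; // tag + index tuple(s) per constant pool slot
    for (int i = 1; i < count; i++) {
      int tag = in.readUnsignedByte();
      switch (tag) {
        case 1: utf8[i] = in.readUTF(); break;                    // CONSTANT_Utf8
        case 3: case 4: in.readInt(); break;                      // Integer, Float
        case 5: case 6: in.readLong(); i++; break;                // Long, Double (2 slots)
        case 7: case 8: case 16:                                  // Class, String, MethodType
          entry[i] = new int[] { tag, in.readUnsignedShort() }; break;
        case 15: in.readUnsignedByte(); in.readUnsignedShort(); break; // MethodHandle
        case 9: case 10: case 11: case 12: case 18:  // Field/Method/InterfaceMethodref, NameAndType, InvokeDynamic
          entry[i] = new int[] { tag, in.readUnsignedShort(), in.readUnsignedShort() }; break;
        default: throw new IOException("unknown constant pool tag: " + tag);
      }
    }
    List<String> result = new ArrayList<String>();
    for (int i = 1; i < count; i++) {
      if (entry[i] != null && entry[i][0] == 10) {                // CONSTANT_Methodref
        String clazz = utf8[entry[entry[i][1]][1]];               // Class -> Utf8 name
        int[] nameAndType = entry[entry[i][2]];                   // NameAndType -> (name, descriptor)
        result.add(clazz + "#" + utf8[nameAndType[1]] + " " + utf8[nameAndType[2]]);
      }
    }
    return result;
  }

  public static void main(String[] args) throws IOException {
    "HELLO".toLowerCase(); // deliberate "violation" so our own bytecode contains the call
    List<String> refs = methodRefs(
        ConstantPoolScanner.class.getResourceAsStream("/ConstantPoolScanner.class"));
    for (String ref : refs) {
      if (ref.startsWith("java/lang/String#toLowerCase")) {
        System.out.println("Forbidden: " + ref);
      }
    }
  }
}
```

Running it against its own class file reports the deliberate `String.toLowerCase()` call; the real task does the same over every `.class` file in the build and fails ANT on any match.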

In the meantime, Robert worked on the list of “forbidden” APIs. This is what came out in the first version:
Using this easily extendable list, saved in a text file (UTF-8 encoded!), you can invoke my new ANT task (after registering it with <taskdef/>) very easily -- taken from Lucene/Solr’s build.xml:
<taskdef resource="lucene-solr.antlib.xml">
  <classpath>
    <pathelement location="${custom-tasks.dir}/build/classes/java" />
    <fileset dir="${custom-tasks.dir}/lib" includes="asm-debug-all-4.0.jar" />
  </classpath>
</taskdef>
<forbidden-apis>
  <classpath refid="additional.dependencies"/>
  <apiFileSet dir="${custom-tasks.dir}/forbiddenApis">
    <include name="jdk.txt" />
    <include name="jdk-deprecated.txt" />
    <include name="commons-io.txt" />
  </apiFileSet>
  <fileset dir="${basedir}/build" includes="**/*.class" />
</forbidden-apis>
The classpath given is used to look up the API signatures (provided as apiFileSet). The classpath is only needed if signatures refer to classes from 3rd-party libraries. The inner fileset should list all class files to be checked. For running the task you also need asm-debug-all-4.0.jar available in the task’s classpath.

If you are interested, take a look at the source code; it is open source and released as part of the tool set shipped with Apache Lucene & Solr: Source, API lists (revision number 1360240).

At the moment we are investigating other opportunities brought by that tool:
  • We want to ban System.out/err or things like the horrible Eclipse-style auto-generated try...catch...printStackTrace() exception stubs. We can simply ban those fields of the java.lang.System class and, of course, Throwable#printStackTrace().
  • Using optimized Lucene-provided replacements for JDK API calls. This can be enforced by failing on the JDK signatures.
  • Failing the build on deprecated calls to Java’s API. We can of course print warnings for deprecations, but failing the build is better. And: We use deprecation annotations in Lucene’s own library code, so javac-generated warnings don’t help. We can use the list of deprecated stuff from JDK Javadocs to trigger the failures.
I hope other projects take a similar approach to scan their binary/source code and free it from system-dependent API calls, which are not predictable for production systems in the server environment.

Thanks to Robert Muir and Dawid Weiss for help and suggestions!

EDIT (2015-03-14): On 2013-02-04, I released the plugin as Apache Ant, Apache Maven and CLI task on Google Code; later on 2015-03-14, it was migrated to Github. The project URL is: https://github.com/policeman-tools/forbidden-apis. The tool is available to your builds using Maven/Ivy through Maven Central and Sonatype repositories. Nightly snapshot builds are done by the Policeman Jenkins Server and can be downloaded from the Sonatype Snapshot repository.


Is your IndexReader atomic? - Major IndexReader refactoring in Lucene 4.0

Note: This blog post was originally posted on the SearchWorkings website.

Since Day 1 Lucene has exposed the two fundamental concepts of reading and writing an index directly through IndexReader & IndexWriter. However, the API didn’t reflect reality: from the IndexWriter perspective this was desirable, but when reading the index this caused several problems in the past. In reality a Lucene index isn’t a single index, even though it is logically treated as such. The latest developments in Lucene trunk try to expose reality for type-safety and performance, but before I go into details about Composite, Atomic and DirectoryReaders let me go back in time a bit.

Since version 2.9 / 3.0 Lucene started to move away from executing searches directly on the top-level IndexReaders towards a per-segment orientation. As Simon Willnauer already explained in his blog entry, this led to the fact that optimizing an index is no longer needed to improve search performance. In fact, optimizing would slow your searches down, as after optimizing, all file system and Lucene-internal index caches get invalidated.

A standard Lucene index consists of several so-called segments, which are themselves fully-functional Lucene indexes. During indexing, Lucene writes new documents into separate segments and, once there are too many segments, merges them (see Mike McCandless’ blog: Visualizing Lucene's segment merges):

Prior to Lucene 2.9, the segments were treated as though they were a single big index, despite consisting of multiple underlying segments. Since then, Lucene has shifted towards a per-segment orientation. By now almost all structures and components in Lucene operate on a per-segment basis; among other things this means that Lucene only loads actual changes on reopen instead of the entire index. From a user's perspective it might still look like one big logical index, but under the hood everything works per-segment, as this (simplified) IndexSearcher snippet shows:
 public void search(Weight weight, Collector collector) throws IOException {  
  // iterate through all segment readers & execute the search  
  for (int i = 0; i < subReaders.length; i++) {  
   // pass the reader to the collector  
   collector.setNextReader(subReaders[i], docStarts[i]);  
   final Scorer scorer = ...;  
   if (scorer != null) { // score documents on this segment  
    scorer.score(collector);  
   }  
  }  
 }  
However, the distinction between a logical index and a segment wasn’t consistently reflected in the code hierarchy. In Lucene 3.x, one could still execute searches on a top-level (logical) reader without iterating over its subreaders. Doing so could slow down your searches dramatically, provided your index consisted of more than one segment. Among other reasons, this was why ancient versions of Lucene instructed users to optimize the index frequently.

Let me explain the problem in a little more detail. An IndexReader on top of a Directory is internally a MultiReader over all enclosing SegmentReaders. If you ask a MultiReader for a TermEnum or the postings, it executes an on-the-fly merge of all subreaders’ terms or postings data, respectively. This merge process uses priority queues or related data structures, leading to a serious slowdown depending on the number of subreaders.

Yet, even beyond these internal limitations, using SegmentReaders in combination with MultiReaders can influence higher-level structures in Lucene. The FieldCache is used to uninvert the index to allow sorting of search results by indexed value, or document / value lookups during search. Uninverting the top-level reader leads to duplication in the FieldCache and essentially multiple instances of the same cache.

Type-Safe IndexReaders in Lucene 4.0

From day one, Lucene 4.0 was designed to not allow retrieving terms and postings data from “composite” readers like MultiReader or DirectoryReader (the implementation returned for on-disk indexes if you get a reader from IndexReader.open(Directory)). Initial versions of Lucene trunk simply threw an UnsupportedOperationException when you tried to get instances of Fields, TermsEnum, or DocsEnum from a non-SegmentReader. Because of the missing type safety, one couldn’t rely on the ability to get postings from an IndexReader without manually checking whether it was composite or atomic.

LUCENE-2858 is one of the major API changes in Lucene 4.0; it completely changes the Lucene client code’s “perspective” on indexes and their segments. The abstract class IndexReader has been refactored to expose only essential methods to access stored fields during display of search results. It is no longer possible to retrieve terms or postings data from the underlying index; not even deletions are visible anymore. You can still pass IndexReader as constructor parameter to IndexSearcher and execute your searches; Lucene will automatically delegate procedures like query rewriting and document collection to the atomic subreaders.

If you want to dive deeper into the index and write your own queries, take a closer look at the new abstract subclasses AtomicReader and CompositeReader:

AtomicReader instances are now the only source of Terms, Postings, DocValues and FieldCache. Queries are forced to execute on an AtomicReader on a per-segment basis, and FieldCaches are keyed by AtomicReaders. Its counterpart CompositeReader exposes a utility method to retrieve its composites. But watch out: composites are not necessarily atomic. Next to the added type-safety, we also removed the notion of index commits and version numbers from the abstract IndexReader; the associations with IndexWriter were pulled into a specialized DirectoryReader. Here is an “example” executing a query in Lucene trunk:
 DirectoryReader reader = DirectoryReader.open(directory);  
 IndexSearcher searcher = new IndexSearcher(reader);  
 Query query = new QueryParser("fieldname", analyzer).parse("text");  
 TopDocs hits = searcher.search(query, 10);  
 ScoreDoc[] docs = hits.scoreDocs;  
 Document doc1 = searcher.doc(docs[0].doc);  
 // alternative:  
 Document doc2 = reader.document(docs[1].doc);  
Does that look familiar? Well, for the actual API user this major refactoring doesn’t bring much change. If you run into compile errors related to this change while upgrading, you have likely found a performance bottleneck.

Enforcing Per-Segment semantics in Filters

If you have more advanced code dealing with custom Filters, you might have noticed another new class hierarchy in Lucene (see LUCENE-2831): IndexReaderContext with corresponding Atomic-/CompositeReaderContext. This was added quite a while ago but is closely related to atomic and composite readers.

The move towards per-segment search in Lucene 2.9 exposed lots of custom Queries and Filters that couldn't handle it. For example, some Filter implementations expected the IndexReader passed in to be identical to the IndexReader passed to IndexSearcher, with all its advantages like absolute document IDs. Obviously this “paradigm shift” broke lots of applications, especially those that utilized cross-segment data structures (like Apache Solr).

In Lucene 4.0, we introduce IndexReaderContext, a “searcher-private” reader hierarchy. During Query or Filter execution, Lucene no longer passes raw readers down to Queries, Filters or Collectors; instead, components are provided an AtomicReaderContext (essentially a hierarchy leaf) holding relative properties like the document basis in relation to the top-level reader. This allows Queries & Filters to build up logic based on document IDs, despite the per-segment orientation.

Can I still use top-level readers?

There are still valid use-cases where top-level readers, i.e. “atomic views” on the index, are desirable. Let's say you want to iterate all terms of a complete index for auto-completion or faceting; Lucene provides utility wrappers like SlowCompositeReaderWrapper emulating an AtomicReader. Note: using “atomicity emulators” can cause serious slowdowns due to the need to merge terms, postings, DocValues, and FieldCache; use them with care!
 Terms terms = SlowCompositeReaderWrapper.wrap(directoryReader).terms("field");  
Unfortunately, Apache Solr still uses this horrible code in a lot of places, leaving us with a major piece of work undone. Major parts of Solr’s faceting and filter caching need to be rewritten to work per atomic segment! For those implementing plugins or other components for Solr, SolrIndexSearcher exposes an “atomic view” of its underlying reader via SolrIndexSearcher.getAtomicReader().

If you want to write memory-efficient and fast search applications (that do not need those useless large caches like Solr uses), I would recommend not using Solr 4.0 and instead writing your search application around the new Lucene components like the new facet module and SearcherManager!


JDK 7u2 released - How about Linux and other operating systems?

Last week, Oracle released Java 7 Update 2, another milestone. This release of course includes all the fixes that were already in Update 1 (see also Oracle's page), especially those affecting Apache Lucene and Solr. Since my last post on this blog, I have been investigating what changed and how other operating systems like Ubuntu/Redhat Linux and FreeBSD are supported (warning: sarcasm alert!)


First of all, you can of course download the official Linux packages from Oracle. But those are not automatically updated when a new release comes out. So most Linux users prefer to use the automatic update of their operating system. Unfortunately, at the beginning of this month, Ubuntu wrote in an announcement:
As of August 24th 2011, we no longer have permission to redistribute new Java packages as Oracle has retired the "Operating System Distributor License for Java".
Oracle has published an advisory about security issues in the version of Java we currently have in the partner archive. Some of these issues are currently being exploited in the wild. Due to the severity of the security risk, Canonical is immediately releasing a security update for the Sun JDK browser plugin which will disable the plugin on all machines. This will mitigate users' risk from malicious websites exploiting the vulnerable version of the Sun JDK.
In the near future (exact date TBD), Canonical will remove all Sun JDK packages from the Partner archive. This will be accomplished by pushing empty packages to the archive, so that the Sun JDK will be removed from all users machines when they do a software update. Users of these packages who have not migrated to an alternative solution will experience failures after the package updates have removed Oracle Java from the system.
If you are currently using the Oracle Java packages from the partner archive, you have two options: 
  • Install the OpenJDK packages that are provided in the main Ubuntu archive (openjdk-6-jdk or openjdk-6-jre for the virtual machine).
  • Manually install Oracle's Java software from their web site.
Unfortunately this means that we will never get an official Ubuntu package for Java 7! And what are all these security bugs that are suddenly heavily exploited in the wild?

OK, the latest version of Ubuntu's JDK 6 was Update 26, so what security fixes came in Update 27, Update 29, and Update 30? I inspected the changelogs shipped with the openjdk6 and openjdk7 packages, which are now the "official Java support" for Ubuntu (and also Redhat), but there is something wrong: it's not even OpenJDK! OpenJDK is still on build 147 (as of their official download page) - which is the original Java 7 release that broke Apache Lucene and Apache Solr with index corru(m)ptions and SIGSEGVs. Does this mean no Linux user can run our full-text search engine, because it SIGSEGVs shortly after starting? But that's not what the Ubuntu package contains: what Ubuntu "sells" as OpenJDK is indeed a strange product named "IcedTea" - wtf is that?

IcedTea 2.0 was released on October 19, 2011 with a long list of security fixes! But the Ubuntu download still has the famous build number 147 in its version number: 7~b147-2.0-1ubuntu2 - how does this fit together? Redhat and Ubuntu both ship another product, "IcedTea", but labeled as "OpenJDK"! As this is so widely used, Oracle does not seem to update their original OpenJDK release anymore. The IcedCreamTea seems to be the "new" official release? What about all non-Linux operating systems like FreeBSD (see below)? I think that's a bad idea, because it confuses users. Also, when reviewing fixed bugs in official Oracle releases you get an update number (current is Java 6u30 or Java 7u2), but with OpenJDK (sorry, IcedTea) Linux packages you get version numbers that don't tell you any relation to Oracle's releases - useless!

In fact, to come back to the OpenJDK 6 package in Ubuntu: if you install this replacement package on your machine according to the howto on the Ubuntu webpage, in place of the good sun-java6 package (which is u26), you get an older HotSpot version (HotSpot version numbers are the only thing you can read and compare from "java -version" output)! Something around official Oracle JDK 6u24 - so in fact you get an older version: that's no upgrade, that's a downgrade! For OpenJDK 7 you get something like Oracle's JDK 7u0, but with thousands of patches applied.

To come back to the Lucene/Solr bugs: yes, they are fixed in this mysterious OpenJDK/IcedTea 7 release, as the long list of changes verifies. If you download the wrongly-named OpenJDK 7 package with the horrible build number 147 (openjdk-7-jdk 7~b147-2.0-1ubuntu2), you will not crash your JVM with Apache Solr, and you can try it out with the new garbage collector (G1) and some performance improvements (indeed Lucene tests run faster with Java 7 on my box). It looks very stable.

The second shock of that day occurred when I was searching for the famous Lucene bugs in the list of fixed IcedTea issues. They appear there as some of the horrible security bugs with CVE numbers assigned (CVE-2011-3558 and others)! This also explains why Oracle hid the original porter stemmer bug report! They also appear in the openjdk-6 Ubuntu packages - as horrible security bugs, too. So Ubuntu patched the antique 6u24 and older versions with patches for Java 6u29 [they also patched u20, where the bug did not exist, see here] - that's really strange. And again, it confuses users!

And finally: the Lucene bugs seem to be one of the reasons to delete the sun-java6 packages from the Ubuntu Partner repository in the future. How funny is this? Does anybody have an exploit, except starting Apache Solr with the default configuration and -XX:+AggressiveOpts enabled? OK, it is really a security issue for users working with your Solr search web frontend who suddenly produce corru(m)pt indexes on your machine! They might not find anything after this disaster.


What about FreeBSD? It looks much worse: there is no new update of OpenJDK available until today, so you cannot use it to build a new port. The Jenkins server at Apache, running the Lucene tests, is still running the original OpenJDK 7 b147 build that I patched during the summer to work around the Java 7 bugs. I think the problem here is that Oracle no longer releases OpenJDK builds, because IcedTea is there. But IcedTea is Linux-only!

Please note: this blog post is partially a little bit sarcastic; I am just stating my feelings about the whole Linux-Ubuntu-OpenJDK-FreeBSD issue.

A short side note: PANGAEA now runs very stable and horribly fast for some operations with Lucene 3.5 (no Apache Solr) and the official Oracle JDK 7u2 on Solaris x64 (MMapDirectory, of course)! I wish you a merry Xmas and a happy new year!