Mini course and not-so-mini course on semiclassics on YouTube

Last year I co-organized a workshop at the Lorentz Center with Constanza Rojas Molina, Giuseppe De Nittis and Marcello Seri. We had invited experts from various fields of mathematical physics (e.g. the random operators community, non-commutative geometry, semiclassics and classical dynamical systems) to talk about conductivity properties in metals. Our hope was that the workshop would be an opportunity for cross-pollination.

For the workshop, I recorded one of the four mini courses and published it on my YouTube channel. Since it turned out quite long, I have also created a second playlist with a shorter version.

The End of Moore's Law and What It Means for Software

A few weeks ago I saw a talk given by Bob Colwell, one of the brains behind Intel's Pentium Pro (whose architecture has evolved into the CPUs that power Macs today). The gist is that the exponential growth of the number of transistors (Moore's Law) will taper off, and we will enter a new age of computing where innovation can no longer be fueled by ever-increasing processing power. At the same time, the startup Upthere made its presence publicly known. Backed by industry veterans such as Bertrand Serlet, Chris Bourdon and Alex Kushnir, it has received sizable funding from venture capitalists. Its goal is to usher in a new era of cloud computing where all of your data lives in the cloud, and storage on the device is at best a cache. Upthere's admittedly very ambitious plan is to offer an end-to-end solution, from server hardware to end-user software.

These two seemingly disparate threads led me to a question: what will software look like in the future?

In the past, hardware advances have outpaced software advances. While there were a few pieces of software (such as OS X initially) which required more performance than hardware could deliver at the time, nowadays new platforms such as watchOS and tvOS are designed specifically to work well with the hardware available at the time. What is more, the processing speed of the current breed of iOS devices is toe-to-toe with Apple's notebook offerings.

Once there are no more easy gains to be had through advances in chip manufacturing technology, you either have to improve the architecture of the chips or improve the software. From that point on, performance gains will be much harder to come by, and advancements from better software will become increasingly important. What kinds of trends are likely to play a more prominent role as the exponential growth of chip complexity tapers off?

Specialized hardware integrated with software

While specialized hardware has been around literally since the beginning of computing, the cost-benefit analysis was usually skewed towards general purpose hardware. It was simply easier and cheaper to wait for one or two iterations of general purpose hardware rather than design specialized hardware (which is manufactured at a smaller scale) and wait for software to take advantage of it.

Ever since the advent of the smartphone, specialized hardware for things like image processing and encryption has gained more importance, albeit primarily for the sake of power efficiency. But specialized SoCs optimized for specific tasks can also be found in other areas such as storage, from Intel, Annapurna Labs and others. Once CPU and GPU performance levels out, the prospect of dedicating specialized hardware to certain tasks becomes more and more appealing. (This is another reason why I think Intel will be in big trouble in the consumer space: it cannot offer the same level of diversity the ARM ecosystem offers.)

Software, of course, needs to take advantage of custom hardware, so companies which make both their own hardware and their own software are at an advantage. Here, one should not just think of Apple, but also of Amazon, Backblaze, Facebook, Google and Oracle, who are all designing custom hardware to run their software.

Cross-device and cloud-based computing

Speaking of the datacenter, many future pieces of software will integrate the cloud in a smart way. By this I do not mean cloud document storage, but cloud-based processing of your data. One simple example is cloud-based indexing of your files: Google's Photos service not only processes an image file's metadata, but uses sophisticated image recognition algorithms to analyze what is actually in the photo. Even if these algorithms could run on your Mac, iPhone or iPad, they would have to be optimized to work within the power, performance and memory limits imposed upon the system by the hardware.

Software running on dedicated and specialized server hardware is not subject to such limitations; instead, it can be designed to do only one thing, say, image processing, very, very well. Software engineers could implement much more advanced algorithms to take on the heavy lifting of certain computing jobs. "The application" is then designed from the start to be split up into several parts that run across several devices in concert. The Apple Watch can be seen through this lens: because the Watch has limited compute power and no dedicated LTE connection, it has to outsource those tasks to an iPhone. And while the Watch's reliance on an iPhone is a drawback, without it the Apple Watch would not have been a viable product with current technology.

While in some cases this is done out of necessity, there are cases where a cloud element actually improves efficiency. Podcast clients such as Overcast and many RSS readers do not have each installed copy of the app crawl RSS feeds individually; instead, dedicated servers do the job for them. Tens or hundreds of thousands of requests, one per instance of the app, are replaced by a single one, and the devices all talk to one central server instead.
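As a back-of-the-envelope sketch of the client side of such an architecture (the endpoint and types are made up for illustration), one request to the aggregation server replaces crawling every feed individually:

```swift
import Foundation

// Hypothetical client of a central crawler: instead of polling
// dozens of feeds itself, the app asks one server for everything
// that changed since its last sync.
struct FeedUpdate: Decodable {
    let feedURL: URL
    let newEpisodes: [String]
}

func fetchUpdates(since token: String) async throws -> [FeedUpdate] {
    // Made-up endpoint: one request replaces N individual feed crawls.
    let url = URL(string: "https://sync.example.com/updates?since=\(token)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode([FeedUpdate].self, from: data)
}
```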

Such integration across several devices is not just limited to what one may call "consumption of content" (a term that is often used disparagingly), but also features in services like Adobe CC. Once you subscribe, you have access to Adobe's whole app portfolio as well as community components such as Behance. The apps are designed to work with one another: you can start a drawing in Sketch and continue it in Illustrator. Once you are done, you can put it into your digital portfolio or solicit feedback from others.

A focus on software quality and stability

The rapid release cycle of hardware has forced the software divisions of Apple and Google, as well as third-party software vendors, to keep pace. While the criticism of some commentators who lamented the state of Apple's recent software was overblown, software that runs on "computers" is held to a very different standard than software that runs the various controllers and gadgets in a car or an airplane: people expect their car to just work, and would not accept their "radio app" crashing.

Longer release cycles, more modern programming languages and better software frameworks could pave the way for making software quality and stability a higher priority. The effects would initially be subtle, but add up over time: every crash or bug gnaws at the bond of trust between user and device, so fewer crashes mean people can place more trust in their devices. And for certain tasks, you have to aim higher: if Apple and Google really do end up building the rumored cars for consumers, self-driving or not, they had better raise the bar for software quality and stability.

Moreover, many of the hard problems are UI and UX problems: how are certain ideas and paradigms implemented, and how are new features exposed to the user without overwhelming him or her in the process? Here, too, longer development cycles could allow more thought to go into the design, and give software engineers time to tackle hard problems (such as getting the replacement for mDNSResponder right or implementing a new filesystem) which cannot be solved within the 6-8 month window they currently have.

Slower progress: the new normal

The breakneck speed of the computing industry is an outlier; other industries move at a much slower pace. A 5-year-old car is not only acceptable, it is in all likelihood still comparable to its younger 2015 sibling: the horsepower has not doubled or quadrupled, fuel economy is not through the roof, and all the controls are essentially still the same. You would have to go back much further in time to have a markedly different experience. Computers are different: compare a 2010 iPad to a 2014 iPad Air 2 or a 2015 iPad Pro, and the difference is night and day.

While it is understandable that some would bemoan the end of rapid performance and battery life improvements (and all the benefits that come with them), that does not mean our experience has to evolve much more slowly. Instead, other trends, such as the abundant availability of networking and the ability to cheaply put full-fledged computers into more and more everyday objects, will continue to fundamentally change the world. (A new Raspberry Pi Zero costs just $5, and that includes a copy of the MagPi magazine.) I am optimistic that software engineers will rise to the challenge.

The Future of Apple Operating Systems and Platforms

This blog post appeared as a feature article on MacNN.

In 2001, Apple introduced OS X -- a completely revamped, UNIX-based operating system which brought the Mac some much-needed technologies that classic Mac OS could not deliver. It represented a clean break from Mac OS 9, and it is what has kept me in the Apple camp. While OS X -- with its children and grandchildren OSes iOS, watchOS and most recently tvOS -- still forms the basis of Apple's OS strategy, given its age of 26 years (counting from the first public release of NeXTSTEP, which was fused with some Mac OS technologies to give birth to OS X), I have been wondering: what will replace Apple's current OSes?

Apple currently maintains four different platforms: the mouse-based Mac, the touch-based iOS, watchOS and the remote control-based tvOS. While iOS was born of OS X, all other platforms are in fact the "children" of iOS. Despite their different outward appearances, they share a lot of technologies under the hood, such as a common kernel (up to platform-specific optimizations) and certain APIs.

Looking 10 years ahead, what is next? When you look at the classic Mac OS to Mac OS X transition, there were a number of technological factors that forced this change: OS X had features that were necessary, and vital to the survival of the platform. A few of these features included pre-emptive multitasking, full memory protection, and a modern networking stack. And, of course, a more modern programming language with a more modern set of APIs. So what are areas where Apple's platforms are lacking today? The best way to explore this is to find out which fundamental assumptions have changed.

Support multiple platforms with one OS

In 2015 alone, Apple will have released two new platforms to the public, and if the persistent rumors about Apple's foray into the car industry have any truth to them, they won't be the last. However, given that, with few exceptions, tvOS's APIs are identical to those of iOS, developers will hit the ground running.

Under the hood, all of Apple's operating systems are based on versions of the same kernel and the same lower layers -- just like Linux powers everything from smartphones to supercomputers. Apart from platform-specific optimizations, what differs on each of them are the APIs. If Apple wants to make it easier for itself to nurture that many platforms, and to allow developers to leverage their knowledge, it is natural to expect a convergence of Apple's APIs across platforms -- only those that need to be different should be different.
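To illustrate what "only those that need to be different should be different" could look like, here is a minimal, entirely hypothetical sketch in Swift: the shared protocol is identical on every platform, and conditional compilation confines the platform-specific parts to a couple of lines.

```swift
// Hypothetical cross-platform API: the shared surface is identical,
// and only the platform-specific pieces vary per OS.
#if os(iOS) || os(tvOS) || os(watchOS)
import UIKit
typealias PlatformColor = UIColor   // UIKit on iOS, tvOS and watchOS
#elseif os(macOS)
import AppKit
typealias PlatformColor = NSColor   // AppKit on the Mac
#endif

// The part every platform sees, unchanged.
protocol Sketchable {
    func draw(in rect: CGRect, tint: PlatformColor)
}
```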

More unified Swift-native APIs

I think the transition to Swift is perfectly timed: Apple claims it has "designed [Swift] to scale from 'hello, world' to an entire operating system". That is a pretty tall order for a programming language. While the reception amongst the developer community has generally been positive, there are a few areas where Swift regresses compared to Objective-C. Be that as it may, Swift is here to stay, and unless something unforeseen happens, it will eventually replace Objective-C. Consequently, we should eventually see Swift-only, or rather Swift-native, APIs.

Thanks to the particular way Swift has been implemented (most notably, it shares a runtime with Objective-C), developers can call Objective-C APIs from Swift, and hence Apple does not have to replace all of the functionality of its Objective-C-based APIs, such as Cocoa, in one go. Neither Cocoa nor Cocoa Touch was designed from the start to be modular or to allow reuse across multiple platforms. With the transition to Swift-native APIs, though, Apple will have the chance to start with a clean slate, and engineer these APIs with the assumption in mind that they will run on different platforms. I reckon the first few APIs may appear within the next two or three years, after Swift as a language has matured a little.
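The bridge already works in one direction today; as a small, self-contained example, Swift can call the Objective-C-based Foundation framework directly, with NSString bridging to Swift's native String for free.

```swift
import Foundation

// FileManager and NSString are Objective-C classes under the hood,
// yet Swift can use them as if they were native.
let manager = FileManager.default
let documents = manager.urls(for: .documentDirectory, in: .userDomainMask)

// NSString from the Objective-C world bridges to Swift's String for free.
let legacy: NSString = "Bridged from the Objective-C runtime"
let native: String = legacy as String
print(native, documents)
```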

From the ubiquity of networking to the ubiquity of the cloud

When transitioning from classic Mac OS to Mac OS X, Apple went from an operating system that was initially designed to be used stand-alone to an OS designed to power computers that were always connected to a network. Even then, however, your data was stored on a single machine, be it the user's desktop or sometimes a server. Clearly, we have entered a stage where this fundamental assumption is breaking down: people own multiple devices and work on the same data on all of them. Your data gets synced over "the cloud," and "the cloud" is no longer one server your machine talks to, but a service consisting of many moving pieces that you as a user have no control over.
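To see what "a service, not a server" means for app code, consider a minimal sketch using Apple's CloudKit (the record type "Document" and its field are made up for illustration): the app addresses an abstract database and never a particular machine.

```swift
import CloudKit

// Save a record to the user's private database. "Document" is a
// hypothetical record type; CloudKit decides where and how the data
// is stored -- the app never sees an individual server.
let record = CKRecord(recordType: "Document")
record["title"] = "Draft" as CKRecordValue

CKContainer.default().privateCloudDatabase.save(record) { saved, error in
    if let error = error {
        print("Sync failed: \(error)")
    } else {
        print("Synced as \(saved?.recordID.recordName ?? "?")")
    }
}
```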

All current syncing and snapshotting solutions (such as Time Machine, iCloud, Dropbox or Synology's Cloud Station) are tacked onto a filesystem that is old and creaky, and that does not check the consistency of its data. We all have multiple machines, and we would like to be able to access all of our data on all of them (unless we decide otherwise).

A new distributed filesystem

That is why the next big thing on my list is a new, distributed filesystem. HFS+ is based on 30-year-old technology and is ripe to be replaced. Unfortunately, waiting for a new filesystem feels like waiting for Godot, and the "strategy tax" for Apple has been mounting.

Basically, you would like a filesystem that is built on the assumption that you sync your data to machines of different types. While distributed filesystems already exist (e.g. DragonFly BSD's HAMMER and HAMMER2, or Backblaze's Vaults for cloud storage), none of them is capable of running on the full gamut of devices, from the Apple Watch to Macs. Perhaps a "multi-tiered" filesystem is necessary, where not all features are enabled on all platforms.

Built-in data integrity

I love ZFS, or rather the basic ideas underlying ZFS: you use checksums to be able to detect whether your data has been changed inadvertently by cosmic rays or some bug. This has to be the starting point: given today's storage capacities, it is very likely that you will encounter random bit flips in your hard drives, RAM or SSDs.

As the expected number of bit flips scales at least linearly with capacity, the industry will have to adopt end-to-end error correction in mainstream computing devices at some point. We will have many copies of our data stored on a multitude of devices and in the cloud, so making sure that the data is identical on all devices has to be a central tenet of the new filesystem. As data integrity via checksums is usually established on the block rather than the file level, this would also allow for more efficient syncing mechanisms that use our bandwidth more frugally.
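To make this concrete, here is a minimal sketch of block-level checksumming in Swift, using Apple's newer CryptoKit framework (my own illustration, not how ZFS or any Apple filesystem actually works): comparing per-block digests reveals corruption as well as which blocks need to be re-synced.

```swift
import Foundation
import CryptoKit

// Split data into fixed-size blocks and compute a SHA-256 digest per
// block. Comparing digest lists between two copies reveals corruption
// as well as which blocks are out of date.
func blockChecksums(of data: Data, blockSize: Int = 4096) -> [String] {
    stride(from: 0, to: data.count, by: blockSize).map { offset in
        let block = data[offset ..< min(offset + blockSize, data.count)]
        return SHA256.hash(data: block).map { String(format: "%02x", $0) }.joined()
    }
}

let local  = blockChecksums(of: Data("hello, world".utf8))
let remote = blockChecksums(of: Data("hello, w0rld".utf8))

// Only blocks whose digests differ need to cross the network.
let stale = zip(local, remote).enumerated()
    .filter { $0.element.0 != $0.element.1 }
    .map { $0.offset }
print("Blocks to re-sync:", stale)
```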

Solid state-based storage

A second fundamental assumption that can be made is that all future devices will store data on SSDs. This is important, as SSDs are highly complex computers in their own right, complete with their own miniature operating system, filesystem and so on; yet the computer's operating system is oblivious to these inner workings and makes no attempt to leverage the SSD's specific capabilities.

Certain features of modern filesystems, such as copy-on-write, are effectively implemented inside any SSD because of the way SSDs work. So if the OS and SSD were to collaborate, a new "filesystem" could inherit copy-on-write from the SSD and would not have to implement it a second time by itself. In addition, tasks surrounding data integrity could be handled by the SSD controller with specialized hardware, enabling this feature even on much more anemic devices.
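Copy-on-write is easiest to see at the language level, where Swift's standard library already uses it for its collection types; here is a toy sketch of the pattern (illustrative only, not the SSD-level mechanism described above):

```swift
// A minimal copy-on-write wrapper: copies are cheap references,
// and the underlying storage is only duplicated on mutation.
final class Storage { var bytes: [UInt8] = [] }

struct Blob {
    private var storage = Storage()

    mutating func append(_ byte: UInt8) {
        // Duplicate the storage only if someone else still shares it.
        if !isKnownUniquelyReferenced(&storage) {
            let copy = Storage()
            copy.bytes = storage.bytes
            storage = copy
        }
        storage.bytes.append(byte)
    }

    var count: Int { storage.bytes.count }
}

var a = Blob()
a.append(1)
var b = a        // no data copied yet; both share one Storage
b.append(2)      // now the storage is duplicated, exactly once
print(a.count, b.count)  // prints: 1 2
```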

Frequent and live updates

Another area where a modern filesystem could enable great, user-facing functionality is updates: software and the OS get updated more and more frequently. Even now, installing updates is a hassle, because the OS cannot update apps that are running, or update the kernel while keeping everything going. More modern filesystems (in conjunction with functionality added to the kernel) would allow live updates even to the OS (Linux already supports this), and hopefully in the future users would not even notice updates for the most part, except by what has visibly changed. This does raise the question, however, of whether users will want to be advised of what is about to happen, or just accept the changes from "on high."
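Foundation already gives a small taste of transactional updates at the file level: FileManager can swap a file in atomically, so readers see either the old version or the new one, never a half-written state. A snapshotting filesystem would extend this guarantee to whole directory trees and, eventually, to the running OS. A minimal sketch:

```swift
import Foundation

// Stage the new version somewhere temporary, then swap it in
// atomically: at no point does a half-written file exist at the
// destination.
func applyUpdate(staged: URL, to destination: URL) throws {
    _ = try FileManager.default.replaceItemAt(
        destination,
        withItemAt: staged,
        backupItemName: "previous-version",   // recoverable until the swap succeeds
        options: .usingNewMetadataOnly
    )
}
```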

Automated file management

Perhaps the most difficult challenge is to add more automation to file management. If you look at iOS in particular, it is clear that Apple hasn't cracked this problem yet (nor has anyone else, for that matter). Most people are horrible at naming files (and usually do not add their own metadata), and even for someone like me, who tries to put a lot of effort into it, it is a chore to rename newly downloaded pdf files so that they are easily searchable. While this is something that would benefit from a more modern filesystem, the most challenging aspect is arguably the user-facing portion: how much should we automate in the first place? How are relations between files displayed to the user? What are sensible default behaviors, and how flexible should the system be?

Apple's first stab at new user data management was to strictly "silo" it on iOS (and to some degree also on OS X), an experiment that failed. While siloing makes sense for applications such as iTunes, it was not a good fit for other use cases, because it made it impossible for users to work on one file with several different applications. Moreover, the context -- the relations between files -- was impossible to represent. It is the norm rather than the exception that files of many different types are involved in a single project, e.g. images, spreadsheets, Word documents and pdfs (in addition to emails and chats).

What is more, Spotlight's current indexing algorithms are quite poor compared to a search engine like Google's. Spotlight cannot extract title and author from a pdf file (unless this is included as metadata with the pdf). Likewise, while Photos can recognize faces, it does not recognize landmarks, different animals or cars. So unless the user adds these keywords by hand (and goes through the motions of confirming all the faces), the computer is none the wiser. Now imagine a world where Photos would be able to recognize the Eiffel Tower in the background, and conclude that at the time the photo was taken, you were most likely in Paris.
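For reference, the metadata that is available today can be read with Apple's PDFKit, but only if the author embedded it; there is no fallback analysis of the document's content. A minimal sketch (the file path is illustrative):

```swift
import PDFKit

// Read the metadata embedded in a pdf -- if the author bothered
// to include it. There is no analysis of the actual content.
if let document = PDFDocument(url: URL(fileURLWithPath: "/tmp/paper.pdf")),
   let attributes = document.documentAttributes {
    let title = attributes[PDFDocumentAttribute.titleAttribute] as? String
    let author = attributes[PDFDocumentAttribute.authorAttribute] as? String
    print("Title: \(title ?? "not set"), author: \(author ?? "not set")")
}
```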

A Photos app that recognizes the Eiffel Tower sounds like science fiction, but it is actually a feature of Google Photos, which ships today. After you upload your photos, Google analyzes them on its servers, allowing you to search for "Paris 2003" to find the photo with the Eiffel Tower in the background (as well as a host of other pictures taken during that vacation, which are associated with it by the time they were taken, so that not every picture identified as being from Paris needs to have the Eiffel Tower in it).

These algorithms add context by automatically tagging photos based on what is in them, which is a lot more powerful than the manual tagging and flaky face recognition included with common pieces of DAM software. Moreover, they understand context in a deeper sense, e.g. that the Eiffel Tower is located on the Champ de Mars in Paris. Imagine the possibilities if Spotlight's indexing were powered by something resembling Google's search algorithms.
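On-device building blocks for this kind of automatic tagging have since appeared; as a hedged sketch, Apple's Vision framework (a newer API than anything discussed above) can produce classification labels for an image, which a Spotlight-like indexer could store as searchable metadata. The file path is illustrative:

```swift
import Vision

// Classify the contents of an image and keep the confident labels;
// an indexer could attach these as searchable metadata.
let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "/tmp/vacation.jpg"))
let request = VNClassifyImageRequest()

do {
    try handler.perform([request])
    let tags = (request.results ?? [])
        .filter { $0.confidence > 0.8 }
        .map { $0.identifier }
    print("Suggested tags:", tags)
} catch {
    print("Classification failed: \(error)")
}
```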

And privacy issues aside -- these are very serious, and will be addressed below -- from a technical perspective, a cloud-centric approach makes perfect sense: a powerful server in the cloud running 24/7 is much better suited to doing the hard number crunching than your iPad or iPhone. Just like Google adapts its search results to your profile (in no small part to serve you "better" ads), an automated file management system could take your idiosyncrasies into account.

Privacy and maintaining control over your data

While automated, intelligent indexing of user data in the cloud could bring about entirely new ways to work with our data, it also reveals a lot about the user. What are you ok with Apple -- or Google -- knowing? Privacy concerns will shape the future of Apple's platforms, even though privacy enters as a design philosophy rather than a technology. The company has said publicly and repeatedly that protecting the user's privacy is paramount, and that, given the choice, it does not want to know. Even in cases where transmitting user data becomes necessary, Apple will only transmit the minimal amount that the functionality of its devices requires in order to work.

Moreover, Apple is willing to sacrifice functionality for user privacy. Forcing certain information to stay on the device makes it harder to implement certain features, because you have less-sophisticated algorithms at your disposal, and necessarily also less context. After all, computing can only become more personal if your devices know more about your state, be it your schedule, where you live, what your habits are, or even your current heart rate.
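One classical technique for squaring this circle is randomized response, a simple precursor of differential privacy (my illustration; nothing here claims Apple uses this exact scheme): each device randomly flips its answer before reporting it, so no single report can be trusted, yet the aggregate statistic is still recoverable.

```swift
import Foundation

// Randomized response: report the true bit only half the time;
// otherwise report a fair coin flip. Any single report is
// deniable, but the population frequency is recoverable.
func privatize(_ truth: Bool) -> Bool {
    Bool.random() ? truth : Bool.random()
}

// Server side: if p is the observed frequency of "true" reports,
// the true frequency is approximately 2p - 0.5.
let reports = (0 ..< 100_000).map { _ in privatize(true) }
let p = Double(reports.filter { $0 }.count) / Double(reports.count)
print("Estimated true frequency:", 2 * p - 0.5)  // close to 1.0
```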

On the other hand, to do any kind of processing on the data in the cloud, Apple needs to have access to user data. So where do you strike the balance between preserving the user's privacy and enabling new features and capabilities? How much do you want to share with Apple, Dropbox or Google? Do you find the inherent risks acceptable (someone could hack your account)? For some people, the answer is yes. But no matter where you stand on this, I think most would agree that this should be a conscious decision on the user's part.

Moreover, apart from personal preference, there are cases where the user does not have a choice. Health care professionals must adhere to HIPAA's Privacy Rule, for instance, and in the private sector you may have clients who do not want their sensitive data on third-party servers. It will be interesting to see how cloud services evolve to take on these challenges.

Continuous integration: the path to the new OS

Unlike Mac OS 8 and Mac OS 9, OS X and iOS do not have glaring deficiencies that would preclude them from evolving or require replacing most of the underpinnings in one fell swoop. Apple can replace only the parts that need replacement: Core Storage could pave the way for a new filesystem; the Finder could tap into the power of modern indexing algorithms by extending Spotlight; Time Machine's interface need not change at all, even if its underpinnings end up working very differently. This is good news for us and for Apple.

How does the competition stack up?

To conclude, it is good to have a peek over the fence and look at what the competition is up to. Not all the smart people work for Apple; its competitors are well aware of these trends, and for the most part attempt to find their own path into the future.

Microsoft Windows and Azure

Microsoft's biggest problem from 2010 onwards has not been its lack of vision, but its execution. During the legendary All Things D joint interview of Steve Jobs and Bill Gates, it was uncanny how much agreement there was between the two: "all things will be computers in different form factors, so what?", and the "different screen sizes," as Gates put it, require different input paradigms and come with the expectation that all of your screens "work together."

Microsoft is executing on that strategy by emphasizing the commonalities: all devices will run a version of Windows (one OS on different platforms). The Surface is literally a PC that can also replace a tablet at times. Apple, on the other hand, emphasizes the differences between the platforms (still one OS on different platforms): it developed the iPad Pro from its new "default OS" (the one which brings forth the children) by adding features over time, until the iPad became a tablet that can replace a PC at times. Apple and Microsoft are aiming at the same point, but coming from different directions.

Apart from this philosophical difference, Microsoft has done much the same things: it has added new screen sizes, including a big touch screen that doubles as a smart, computerized whiteboard, and it has adopted a new, well-respected programming language (C#) with new APIs. Microsoft also acknowledges the importance of the cloud with its Azure strategy. So I feel that for the most part the new Microsoft really "gets it", although it has been less successful at executing its strategy in the consumer space.

Google, Android and Linux

As Linux has no appreciable presence as a desktop OS, the only relevant operating system for the purpose of this discussion is Android. Google's strategy is arguably more diverse and more complex, in part because Google does not completely control all aspects of its platform (e.g. kernel development, or the flavors of Android based on AOSP).

Technically, any Linux-based OS has access to very advanced features such as modern filesystems and sophisticated schedulers which can make use of heterogeneous multiprocessing. On Android, however, the lack of OS updates over the lifetime of the device means developers cannot rely on many new features, and it is the root cause of serious security problems.

Moreover, there is much more diversity: in part, this is because most Android handset manufacturers insist on skinning Android and adding their own bits of code of varying quality, differentiating themselves just for the sake of differentiation. In other cases, handset makers (e.g. in China) have no choice but to base their efforts on "Android with the best parts missing". Clearly, this makes it a lot harder for Google to develop its platform into the future, despite the fact that it has the access and the capabilities to develop all of these modern technologies itself.

The journey into the future

Unlike in 2001, when the Macintosh received a much-needed heart transplant, all of these changes can be rolled out gradually over time. Combined with the yearly release cycle, this makes changes harder to detect as they happen. So let us list some canaries in the coal mine:

  • When Apple releases its first new Swift-only API, it will likely be shared across all platforms (where this makes technological sense).
  • If the release schedule of core Apple apps included with OS X, such as Mail and Notes, is decoupled from OS X's release schedule, expect more frequent and continuous upgrades.
  • Watch out for enhancements to Core Storage; they will likely signal the (slow) transition to Apple's next-gen filesystem.
  • Look for APIs that allow applications to automatically generate more metadata.
  • Wait for the last Macintosh with a spinning platter hard drive to be discontinued (currently, the iMac, the Mac mini and the very old non-Retina 13" MacBook Pro still come with old-school hard drives).
  • Have a look at how Share Sheets and Extensions evolve over time; they could be a crucial piece of the puzzle of how users manage files.

Judging by its strategic moves in past years, Apple has been very adept at moving its platforms forward deliberately and patiently: it introduced a new compiler, size classes and multitasking, Extensions and its cross-platform Metal API, and these technologies became enablers of other technologies, sometimes in surprising ways. On the other hand, the optimism is tempered by Apple's past track record in areas such as cloud services. It will be interesting to see what Apple's first move will be.

Acknowledgements

I would like to thank the team at MacNN for helpful suggestions and various corrections which helped improve this post.