2016-05-29

Ubuntu on Dell Latitude E6420 with NVidia and Broadcom

My company sold old laptops to employees and I decided to use the chance to get an affordable and legally licensed Windows 10 system - a Dell Latitude E6420. Unfortunately the system has a Broadcom WiFi card and also ships with an NVidia graphics card, both of which require extra work on Ubuntu 16.04 Xenial Xerus.

After some manual configuration the system works quite well, with a power consumption of about 10-15W while writing this blog article. Switching between the Intel and the NVidia graphics card is simple (a GUI program plus a logout and login); for most use cases I don't need the NVidia card anyway.

Windows 10 also works well, although it does not support all devices. However, the combined NVidia / Intel graphics system works better on Windows than on Linux.

In detail, I took the following steps to install an Ubuntu 16.04 and Windows 10 dual-boot system.

Step-by-Step Installation

Requirements

  • Either a wired network connection or a USB wifi dongle that works in Ubuntu without additional drivers.
  • 4GB USB thumb drive or 2 empty DVDs or 1 re-writable DVD
  • 2 hours of time

Install Windows

  1. Update the firmware to version A23 (use the preinstalled Windows 7 for this task).
  2. Go through the BIOS setup. Make sure to switch the system to UEFI mode and to enable booting off USB or DVD. This really simplifies the multi-OS setup as all operating systems share the same EFI system partition.
  3. Download the Windows 10 media creation tool and use it to create a USB drive or DVD.
  4. Insert the installation media and start the laptop. Press F12 to open the boot menu and select the installation media in the UEFI section.
  5. Install Windows 10. In the hard disk setup simply delete all partitions so that Windows 10 will create its default layout.
  6. Let Windows 10 do its job, rebooting several times. Use the provided Windows 7 product key for Windows 10 and let it activate over the Internet.
  7. All basic drivers install automatically, but some question marks remain in the device manager. Dell does not provide official Windows 10 drivers, so one has to search the Internet for specific solutions. However, Dell provides an overview page for Windows 10 on the E6420.

Install Ubuntu

  1. Create the Ubuntu installation media.
  2. Boot the laptop. Press F12 when it starts and select the installation media in the UEFI section of the boot menu.
  3. Select "Install Ubuntu" in the installer's boot menu. Choose to install Ubuntu alongside Windows. In the disk partitioning dialog reduce the size of the Windows partition to make room for Ubuntu. Leave Windows at least 50GB, otherwise you won't be able to do much with it.
  4. Let Ubuntu finish its installation and boot into Ubuntu.

Optimize and Configure Ubuntu

The default installation needs some additional packages to work well. Make sure that Ubuntu has an Internet connection (wired or via a supported USB wifi dongle).

Note: For the Broadcom WiFi adapter there are several possible drivers in Ubuntu. By default it will install the wl driver, which did not work well for me and caused crashes. The b43 driver works for me, although the WiFi performance is rather low.
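To see which driver is actually bound to the card, and to keep the wl driver from loading, something like this should work (the file name under /etc/modprobe.d is my own choice, any name works):

    lspci -k | grep -A 3 -i network
    echo "blacklist wl" | sudo tee /etc/modprobe.d/blacklist-wl.conf
    sudo update-initramfs -u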

Note: The HDMI output of the laptop is connected to the NVidia graphics chip. Therefore you can use it only when the system uses the NVidia card.
  1. Update Ubuntu and reboot:
    sudo apt update
    sudo apt full-upgrade
    sudo reboot
  2. Install the following packages and reboot:
    sudo apt install firmware-b43-installer \
        nvidia-361 nvidia-prime bbswitch-dkms \
        vdpauinfo libvdpau-va-gl1 \
        mesa-utils powertop
  3. Confirm that the builtin WiFi works now.
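    For a quick check, nmcli (part of NetworkManager, the Ubuntu default) should show the WiFi device as connected:
      nmcli device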
  4. Add the following line to /etc/rc.local before the exit 0 line:
    powertop --auto-tune
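    The whole file should then look roughly like this (a sketch based on the stock Ubuntu rc.local, comments shortened):
      #!/bin/sh -e
      # rc.local - executed at the end of each multiuser runlevel
      powertop --auto-tune
      exit 0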
  5. Reboot.
  6. Check that 3D acceleration works with NVidia:
    glxinfo | grep renderer\ string
    OpenGL renderer string: NVS 4200M/PCIe/SSE2
  7. Check that VDPAU acceleration works with NVidia:
    vdpauinfo | grep string
    Information string: NVIDIA VDPAU Driver Shared Library  361.42  Tue Mar 22 17:29:16 PDT 2016
  8. Open nvidia-settings and switch to the Intel GPU (you will have to confirm with your password).
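    Alternatively, the nvidia-prime package ships a command line tool that should achieve the same:
      sudo prime-select intel
      prime-select query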
  9. Logout and log back in. Confirm that 3D acceleration works now:
    glxinfo | grep renderer\ string
    OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile
  10. Confirm that the NVidia graphics card is actually switched off:
    cat /proc/acpi/bbswitch
    0000:01:00.0 OFF
  11. Confirm that VDPAU acceleration works:
    vdpauinfo | grep string
    libva info: VA-API version 0.39.0
    libva info: va_getDriverName() returns 0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
    libva info: Found init function __vaDriverInit_0_39
    libva info: va_openDriver() returns 0
    Information string: OpenGL/VAAPI/libswscale backend for VDPAU
  12. Check that the power consumption is somewhere between 10W and 15W:
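    powertop (installed above) shows the discharge rate on its overview page, assuming the laptop runs on battery:
      sudo powertop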

Resources

PCI Devices (lspci)

Screen Configuration (NVidia)

Screen Configuration (Intel)

2016-05-19

Lifting the Curse of Static Credentials

Summary: Use digital identities, trust relationships and access control lists instead of passwords. In the cloud, this is really easy.

I strongly believe that static credentials are one of the biggest hazards in modern IT environments. Most information security incidents are somehow related to lost, leaked or guessed static credentials; Instagram's Million Dollar Bug is just one very nice example. Static credentials

  • can be used by anyone who has them - friend or foe
  • are typically very short and can even be brute-forced or guessed
  • have to be stored in configuration files for machine or service users, from where they can be leaked
  • are hard for humans to remember, so people write them down somewhere or store them in files
  • typically stay the same over a long period of time
  • don't include any information about the identity of the bearer or user
  • are hard to rotate on a regular basis because the change has to happen in several places at the same time
All those problems disappear if we use digital identities and trust relationships instead of static credentials. Unfortunately static credentials are incredibly easy to use, which makes them hard to eradicate.

Static credentials are from the Middle Ages

Source: Dr. Pepper ad from 1963 / Johnny Hart
Back in the day, passwords or watchwords were state of the art. For example, membership in a secret club or belonging to a certain town could be proven with a "secret" password. Improvements were the "password of the day" (nice for watchtower situations) or even "challenge response" methods like completing a secret poem or giving a specific answer to a common question.

Basically everything we do with static credentials, for example a website login, follows exactly those early patterns, even though the real world moved on to identity-based access control in the 19th and 20th centuries. Passports and ID cards certify the identity of the bearer; the border control checks that the passport is valid and that the person presents his/her own passport, and then decides if the person is allowed passage. Nobody would even think about granting access to anything in the real world in exchange for just a password.

So why is IT security stuck in the Middle Ages? IMHO convenience and a lack of simple and widespread standards. We see static credentials almost everywhere in our daily business:
  • Website logins - who does not use a password manager? Only very few websites manage without passwords
  • Database credentials - probably the least rotated credentials of all
  • Work account logins - your phone stores that for you
  • SSH keys - key passphrases don't add much security, and SSH key security is much underestimated
  • ...
Sadly, agreeing upon static credentials and manually managing them is still the only universally applicable, compatible and standardized method of access protection that we know in IT.

Modern IT

Luckily, in professional environments there is a way out. In a fully controlled environment there should be no need for static credentials. Every user and every machine can be equipped with a digital identity whose static parts are stored in secure hardware storage (e.g. YubiKey or TPM). Beyond that, all communication and access can be granted based on those digital identities: temporary grants by a granting authority and access control lists give access to resources. The same identity can be used everywhere, thereby eliminating the need for static credentials.

Kerberos and TLS certificates are well known and established implementations of such concepts. Sadly, many popular software solutions still don't support them or make their use unnecessarily complicated. As the need to use certain software typically wins over the wish for tight security, users end up dealing with lots of static credentials. The security risk is deemed acceptable as those systems are mostly accessible from inside only. Instagram's Million Dollar Bug of course proves the folly of this thought. A chain of static AWS credentials found in various configuration files allowed exploiting everything:
Source: Instagram's Million Dollar Bug (Internet Archive) / Wesley
Facebook obviously did not think about the fact that static AWS credentials can be used by anyone from anywhere.

The Cloud Challenge

As we move more and more IT functions into the Cloud, the problem of static credentials gains a new dimension: most of our resources and services are "out there somewhere" and not part of our internal network. There is absolutely no additional layer of security! Anybody who has the static credentials can use them, and you won't even notice it.

Luckily, Cloud providers like Amazon Web Services (AWS) also have a solution for that problem: AWS Identity and Access Management (IAM) provides the security backbone for all communication between machines and users on one side and Amazon APIs on the other. Any code that runs on AWS can be assigned a digital identity (EC2 Instance Role, Lambda Execution Role) which provides temporary credentials via the EC2 instance metadata interface. Those credentials are then used to authenticate API calls to AWS APIs.
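On an EC2 instance with an attached role you can see those temporary credentials yourself through the metadata interface (the role name my-instance-role is just an example):

    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
    # prints the role name, e.g. my-instance-role
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role
    # prints JSON with AccessKeyId, SecretAccessKey, Token and Expiration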

As a result it is possible to run an entire data center on AWS without any static credentials for internal communication. Attackers who gain internal access will not be able to access more resources than the exploited service had access to. Eradicating internal static credentials would therefore have prevented Instagram's Million Dollar Bug.

Avoid Static Credentials

In a world of automation, static credentials are often a nuisance. They have to be added to all kinds of configuration files while being protected from as many eyes as possible. In the end, many secrets management solutions only protect the secrets from admins and casual observers but do not prevent leaked secrets in general. Identity-based security actually helps in automated environments: the problem of static credentials is reduced to just one set for the digital identity, and all other communication uses that identity for authentication.

Eradicating static credentials and using digital identities not only significantly improves security but also assists in automating everything.

If you use AWS, start today to replace static AWS credentials with IAM roles. Use the AWS Federation Proxy to provide IAM roles to containers and on-premise servers in order to remove static AWS credentials from both your public and your private cloud environments.
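With an IAM role in place, tools like the AWS command line client pick up the temporary credentials automatically, so calls like this work without any stored keys (the bucket name is a placeholder):

    aws s3 ls s3://my-example-bucket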

For your local environment, use Kerberos pass-through authentication instead of service passwords.

For websites, try to use federated logins (e.g. OpenID Connect) and favor sites that don't need a password.

For your own stuff, be sure to enable two-factor authentication and to store certificates and private keys in hardware tokens like the YubiKey.

2016-05-13

CoreOS Fest 2016 - Containers are production ready!

The CoreOS Fest 2016 in Berlin impressed me very much: a small Open Source company organizes a 2-day conference around their Open Source tools and even flies in a lot of their employees from San Francisco. A win both for Open Source and for Berlin. And CoreOS also announced that they got new funding of $28M:
Alex Polvi, CEO of CoreOS
More interesting for IT people everywhere is the message one can take home from the conference: container technologies are ready for production, and there is a healthy ecosystem of platforms and tools around them.

In fact, choosing the "right" platform starts to become the main problem for those who still run on traditional virtualization platforms. On the other hand, IT companies who don't evaluate containers in 2016 will be missing out big time.

The hope remains that with the now emerging technologies one does not need to build up a team of support engineers just to run the platform.


2016-05-03

OSDC 2016 - Hybrid Cloud

The Open Source Data Center Conference 2016 is a good measure of how the industry changes. Compared to 2014, Cloud topics take up more and more space: both how to build your own on-premise cloud with Mesos, CoreOS or Kubernetes and how to use the public Cloud.

Maybe not surprisingly, I used the conference to present my own findings from 2 years of Cloud migration at ImmobilienScout24:

After we first tried to find a way to quickly migrate our data centers into the Cloud, we now see that a hybrid approach works better. Data center and cloud are both valued platforms, and we will optimize the costs between them.

Hybrid Cloud - A Cloud Migration Strategy

Do you use Cloud? Why? What about the 15-year legacy of your data center? How many Enterprise vendors tried to sell you their "Hybrid Cloud" solution? What actually is a Hybrid Cloud?

Cloud computing is not just a new way of running servers or Docker containers. The interesting part of any Cloud offering is the managed services that provide solutions to difficult problems. Prime examples are messaging (SNS/SQS), distributed storage (S3), managed databases (RDS) and especially turn-key solutions like managed Hadoop (EMR).

Hybrid Cloud is usually understood as a way to unify or standardize server hosting across private data centers and Public Cloud vendors. Some Hybrid Cloud solutions even go as far as providing a unified API that abstracts away all the differences between the platforms. Unfortunately that approach focuses on the lowest common denominator and effectively prevents using the advanced services that each Cloud vendor also offers. However, these services are the true value of Public Cloud vendors.

Another approach to integrating Public Cloud and private data centers is using services from both worlds depending on the problem to solve. Don't hide the cloud technologies but make it simple to use them - both from within the data center and from the cloud instances. Create a bridge between the old world of the data center and the new world of the Public Cloud. A good bridge will motivate your developers to move the company to the cloud.

Based upon recent developments at ImmobilienScout24, this talk suggests a sustainable Cloud migration strategy from private data centers through a Hybrid Cloud into the AWS Cloud:
  • Bridging the security model of the data center with the security model of AWS.
  • Integrating the AWS identity management (IAM) with the existing servers in the data center.
  • Secure communication between services running in the data center and in AWS.
  • Deploying data center servers and Cloud resources together.
  • Service discovery for services running both in the data center and in AWS.
Most of the tools used are Open Source, and this talk shows how they come together to support this strategy.

As soon as the video is published I will update the talk here.