LCE13: Test and Validation Mini-Summit: Review Current Linaro Engineering Processes

Posted on 18-Nov-2014


Description

Resource: LCE13
Name: Test and Validation Mini-Summit: Review Current Linaro Engineering Processes
Date: 09-07-2013
Speaker:
Video: http://youtu.be/UaRRVASYywk

Transcript

Linaro Test and Validation Summit

Linaro Engineering Teams

LCE13 - Dublin, July 2013

How Do We Better Test our Engineering

2nd Half

PART 1: Linaro Platform

Overview

LAVA: "Citius, Altius, Fortius" ("Faster, Higher, Stronger")

Builds & CI: Build Your Code When You Are Not Looking

QA Services: Cover All Bases

PART 2: Linaro Engineering

Kernel Developers & Maintainers

Landing Teams, Linaro Groups (LEG / LNG)

* What/how do they develop?
* How do they validate/verify their output? (Do they use CI? Manual or automated testing?)
* How do we make our testing better & easier to use?

Linaro Test and Validation Summit

Mike Holmes

LCE13 - Dublin, July 2013

LNG Engineering

● Create: Two Linux kernels (with and without RT) and a Yocto filesystem.

● Benchmark: RT + KVM + HugePage + Dataplane APIs. Required to test kernel and userspace performance; some tests may be run in both spaces.

● Platforms: Arndale, AM335x Starter Kit? (LSI & TI boards in future?) QEMU - Versatile Express?

LNG outputs

● Our code is validated using CI, and performance trends are monitored.

● Our output is verified on one general-purpose ARM platform and against two SoC vendor platforms, via a configurable switch to allow for dedicated links between nodes under test.

● Using open source software: one realistic network application, a general-purpose benchmark, and five feature-specific test suites.

LNG outputs are verified by

Automated testing is done using
● custom scripts run via Jenkins & LAVA, executing
○ (RT) LTP (Real Time Test Tree)
○ (RT) Cyclictest
○ (RT) Hackbench
○ (KVM) virt-test
○ (Hugepage) sysbench OLTP
○ (KVM, Hugepage, RT) openvswitch (kernel and userspace)
○ (KVM, Hugepage, RT) netperf
○ Traffic test cases via pcap files and tcpreplay

LNG uses these tools
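As an illustration of how one of the tests listed above could be automated for lava-test-shell, here is a minimal sketch that wraps cyclictest in a pass/fail check. The latency threshold, cyclictest options, and the output convention are assumptions for the example, not LNG's actual criteria.

import re
import subprocess
import sys

# Fail the test case if worst-case latency exceeds a threshold.
# The threshold (microseconds) and run length are illustrative only.
MAX_LATENCY_US = 200

def worst_case_latency():
    # -q: print only the summary; -D 30s: run for 30 seconds
    out = subprocess.run(
        ["cyclictest", "-q", "-D", "30s", "--priority=80", "--interval=1000"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary lines look roughly like:
    # T: 0 ( 1234) P:80 I:1000 C:  30000 Min:  3 Act:  7 Avg:  6 Max:  58
    return max(int(m.group(1)) for m in re.finditer(r"Max:\s*(\d+)", out))

if __name__ == "__main__":
    worst = worst_case_latency()
    result = "pass" if worst <= MAX_LATENCY_US else "fail"
    # lava-test-shell picks up results by matching a pattern defined in the
    # test definition; a simple "name: result" line is one such convention.
    print("cyclictest-max-latency: %s (max %d usec)" % (result, worst))
    sys.exit(0 if result == "pass" else 1)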

● We test against three branches
○ linux-lng-tip (development)
○ linux-lng-lsk (bug fixes to stable)
○ linux-lng-lsk-RT (bug fixes to stable RT variant)

● LNG-specific CFG fragments
○ KVM (or will this be in an lsk kernel per default?)
○ PREEMPT_RT
○ NO_HZ_FULL (or will this be in an lsk kernel per default?)
○ HUGEPAGE (is that a CFG option?)

LNG Kernel branches / configuration
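A rough sketch of how such config fragments could be folded into a base defconfig, assuming the kernel tree's scripts/kconfig/merge_config.sh helper; the fragment file names, paths, and base defconfig are placeholders, not LNG's actual layout.

import subprocess

# Illustrative fragment files; real fragment names and locations may differ.
FRAGMENTS = [
    "lng/preempt-rt.conf",
    "lng/no_hz_full.conf",
    "lng/kvm.conf",
    "lng/hugepage.conf",
]

def merge_fragments(kernel_tree, base_defconfig="multi_v7_defconfig"):
    # Generate the base .config first...
    subprocess.run(["make", "-C", kernel_tree, "ARCH=arm", base_defconfig], check=True)
    # ...then fold the feature fragments into it. merge_config.sh ships with
    # the upstream kernel tree under scripts/kconfig/.
    subprocess.run(
        ["scripts/kconfig/merge_config.sh", "-m", ".config"] + FRAGMENTS,
        cwd=kernel_tree, check=True,
    )
    # Resolve any dependencies the fragments pulled in.
    subprocess.run(["make", "-C", kernel_tree, "ARCH=arm", "olddefconfig"], check=True)

if __name__ == "__main__":
    merge_fragments("linux-lng")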

● Some of the SoC vendors' hardware has up to 16 x 10Gb links; generating this much traffic is non-trivial.

● Test equipment such as IXIA traffic generators is expensive.

● Test equipment needs to be remotely switched between the different hardware under test in an automated way.

● Scheduling test runs that take days and require specific equipment to be dedicated to the task.

LNG unique challenges

● Multiple nodes may be needed to test traffic interoperability.

● It is not feasible to replicate the test environment at every developer's desk.

● The applied RT patch, even when disabled, alters the execution paths.

● Some tests run for 24 hours or more.

LNG unique challenges

Questions
○ LAVA is (isn't) working for us
■ Interactive shells in the LAVA environment would speed debugging, given that testing can only be performed with the test equipment in the lab.
■ Multinode testing, with the reservation and configuration of network switches, is required.
■ Long-term trends in performance data need to be analysed and compared for regression analysis, triggering alerts for deviations.
○ Further thoughts on Friday
○ https://lce-13.zerista.com/event/member/79674

LNG Q&A
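On the trend-analysis point raised above, a minimal sketch of what deviation alerts could look like: compare the latest benchmark figure against a rolling history and flag large swings. The data layout, window size, threshold, and benchmark name are assumptions for illustration.

import json
import statistics

# history.json is assumed to hold a list of past results per benchmark, e.g.
# {"netperf-tcp-stream-gbps": [9.1, 9.2, 9.0, ...]}
DEVIATION = 0.10  # flag results more than 10% away from the recent mean
WINDOW = 20       # look at the last 20 runs

def check_trends(history_path, latest):
    with open(history_path) as f:
        history = json.load(f)
    alerts = []
    for benchmark, value in latest.items():
        past = history.get(benchmark, [])[-WINDOW:]
        if len(past) < 5:
            continue  # not enough data to establish a trend
        mean = statistics.mean(past)
        if abs(value - mean) / mean > DEVIATION:
            alerts.append("%s: %.2f vs recent mean %.2f" % (benchmark, value, mean))
    return alerts

if __name__ == "__main__":
    latest = {"netperf-tcp-stream-gbps": 7.9}   # placeholder result
    for alert in check_trends("history.json", latest):
        # A real setup would turn these into email or dashboard alerts.
        print("DEVIATION:", alert)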

Linaro Test and Validation Summit

Scott Bambrough

LCE13 - Dublin, July 2013

Landing Team Engineering

● Bootloaders
● Linux kernels based on mainline or current RCs
● Linux kernels based on LSK (expected)
● Ubuntu member builds
● Android member builds
● ALIP member build

Some outputs are public, others confidential.

LT Outputs

● Kernel code is validated using CI in the Linaro LAVA Lab, on various member hardware devices and ARM fast models.

● Our kernel code is also validated in member LAVA labs on both current and next gen hardware.

● Our builds at present are sanity tested by the LTs, but most testing is done by piggybacking on QA or automated testing set up by the platform team.

Verification of LT outputs

● Currently we run only a basic compile/boot test + default CI tests (LTP, powermgmt)

● This needs to change; we want/need to do more

● We need more SoC-level tests; having LTs aware of how to produce tests to run in LAVA will become more important

LT and kernel tests

1. Much better LAVA documentation
2. Document the tests themselves
3. Infrastructure for testing
4. Infrastructure for better analysis of results

LT & Member Services Needs

● Deployment Guide
○ what are the hardware requirements for a LAB
○ what are the infrastructure requirements for a LAB
○ hardware setup, software installation instructions

● Administrator's Guide
○ basically how Dave Piggot does his job
○ after initial setup, day-to-day ops and maintenance

Better Documentation

● Test Developer's Guide
○ how to integrate tests to be run in lava-test-shell (lava glue)
○ recommendations on how best to write tests for lava-test-shell

● User's Guide for lava-test-shell
○ for developers to use lava-test-shell
○ section devoted to using lava-test-shell in the workflow of a kernel developer?

Better Documentation

● Impossible to answer the question: What tests are available in LAVA?

● http://lava-test.readthedocs.org/en/latest/index.html
○ not sufficient, not up to date
○ the problem isn't the LAVA team; Linaro needs an acceptance policy on what a test must have available before being used in LAVA

● would like to see metadata in test documentation that can be used in test reports
○ in a format that can be used in report generation

Document the tests
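As a sketch of the kind of machine-readable test metadata asked for above, and how a report generator could consume it; the field names and values are illustrative, not an agreed Linaro schema.

# Hypothetical per-test metadata, kept alongside the test's documentation.
# Field names and values are illustrative only.
TEST_METADATA = {
    "name": "ltp-realtime",
    "description": "LTP real-time test tree",
    "maintainer": "someone@linaro.org",      # placeholder address
    "requires": ["PREEMPT_RT kernel", "root access"],
    "duration_estimate": "2h",
    "result_type": "pass/fail per test case",
}

def report_header(meta):
    # A report generator could pull these fields straight into the document.
    return "\n".join([
        "Test: %s" % meta["name"],
        "Description: %s" % meta["description"],
        "Estimated duration: %s" % meta["duration_estimate"],
        "Requirements: %s" % ", ".join(meta["requires"]),
    ])

if __name__ == "__main__":
    print(report_header(TEST_METADATA))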

● Buddy systems
○ TI LT developed tests that require access to reference material for comparison
■ video frame captures
■ audio files
○ TI LT audio/video tests required an external box to capture HDMI/audio output
○ Need to do more of this type of automated testing to verify that lower-level functions work correctly at the BSP level
○ GStreamer insanity test suite requires access to multimedia content

Infrastructure for Testing

● Web dashboard won't cut it
● need to separate analysis from display
○ rather do an analysis, then decide how to display
● why infrastructure?
○ think there should be a level of reuse for components used to do analysis
○ think these should be separate from LAVA
○ think of this as more of a data mining operation

Infrastructure for Analysis

example:
● generate test report as PDF
○ perform tests, generate a report
○ include metadata regarding tests
■ metadata from test documentation?

example:
● test report comparing:
○ current member BSP kernel
○ current LT kernel based on mainline
● evidence of quality/stability of LT/mainline kernel
● could be used to convince product teams

Infrastructure for Analysis
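A minimal sketch of the PDF-report idea above, assuming the reportlab library and a flat test-result layout; both are assumptions for illustration, not an existing Linaro tool.

from reportlab.pdfgen import canvas

# Flat {"test-case": "pass"/"fail"} results and hard-coded metadata keep the
# sketch short; real LAVA result bundles carry much more structure.
RESULTS = {"ltp-syscalls": "pass", "cyclictest-max-latency": "fail"}
METADATA = {"kernel": "linux-lng-lsk (example)", "board": "Arndale (example)"}

def write_report(path, metadata, results):
    pdf = canvas.Canvas(path)
    y = 780
    for key, value in metadata.items():
        pdf.drawString(50, y, "%s: %s" % (key, value))
        y -= 20
    y -= 10
    for case, outcome in sorted(results.items()):
        pdf.drawString(50, y, "%s: %s" % (case, outcome))
        y -= 20
    pdf.showPage()
    pdf.save()

if __name__ == "__main__":
    write_report("test-report.pdf", METADATA, RESULTS)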

example:
● regression analysis of kernel changes
○ perform tests one day, make changes, test the next
○ did any test results change?
■ yes: send report of changes via email


Infrastructure for Analysis
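For the regression-analysis example above, a minimal sketch of the "did any test results change?" step; the flat result layout and file names are assumptions, and LAVA result bundles are richer than this.

import json

def diff_results(old_path, new_path):
    # Compare two result sets (e.g. yesterday's and today's runs).
    with open(old_path) as f:
        old = json.load(f)
    with open(new_path) as f:
        new = json.load(f)
    changes = []
    for case in sorted(set(old) | set(new)):
        before, after = old.get(case, "missing"), new.get(case, "missing")
        if before != after:
            changes.append("%s: %s -> %s" % (case, before, after))
    return changes

if __name__ == "__main__":
    changes = diff_results("results-monday.json", "results-tuesday.json")
    if changes:
        # A real setup would mail this report; printing keeps the sketch simple.
        print("Test results changed since the previous run:")
        print("\n".join(changes))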


Linaro Test and Validation Summit

Kevin Hilman

LCE13 - Dublin, July 2013

Kernel Developer/Maintainer

Most kernel development is done with little or no automation

● build: local, custom build scripts
● boot: manual boot testing on local hardware
● debug: custom unit-test scripts, manual verification of results
● publish: to public mailing lists
● merged: into maintainer trees, linux-next
● test: manual test of maintainer trees, linux-next
○ but many (most?) developers don't do this

Current workflow: development

● Code review on mailing list
● build/boot testing by maintainers
● build testing in linux-next (manual)
○ several developers do manual build tests of their pet platforms in linux-next and report failures

● Intel's 0-day tester (automated, but closed)
○ regular, automatic build tests
○ multi-arch build tests
○ boot tests (x86)
○ automatic git bisect for failures
○ very fast results
○ detailed email reports
○ extremely useful

Current workflow: validation

This model is "good enough" for most developers and maintainers, so...

Why should we use Jenkins/LAVA?

Linaro test/validation will have to be
● at least as easy to use (locally and remotely)
● output/results more useful
● faster
○ build time
○ diagnostic time

Current workflow: "good enough"

● Local testing: aid in the build, boot, test cycle
○ local LAVA install, using local boards
○ reduce duplication of custom scripts/setup
○ encourage writing LAVA-ready tests
○ easy to switch between local and remote LAVA lab

● Remote CI: broader coverage
○ "I'm about ready to push this, I wonder if I broke any other platforms..."
○ automatic, fast(ish) response

Potential Usage models

● Has to be easy to install
○ packaged (deb, rpm)
○ or git repo for development (bzr is ......)

● Has to fit into the existing developer workflow
○ LAVA does not exclusively own hardware
○ developers have non-Linaro platforms
○ command-line driven
○ must co-exist with existing interactive use of boards
■ existing Apache setup
■ existing TFTP setup
■ existing, customized bootloaders
■ ...

Local testing: LAVA

● Broad testing
● multi-arch (not just ARM)
● ARM: all defconfigs (not just Linaro boards)
○ also: allnoconfig, allmodconfig, randconfig, ...

● Continuous builds
○ Linus' tree, linux-next, arm-soc/for-next, ...
○ developers can submit their own branches

● On-demand builds
○ register a tree/branch
○ push triggers a build

● fast, automatic reporting of failures
○ without manual monitoring/clicking through Jenkins

Remote CI
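A rough sketch of what the continuous build matrix behind such remote CI could look like: loop over trees and defconfigs, build each, and report the failures. The tree locations, defconfig list, cross-compiler prefix, and -j value are illustrative assumptions.

import subprocess

# Illustrative build matrix; a real job would read trees and defconfigs from
# the CI configuration and cover far more combinations.
TREES = {
    "mainline": "linux",          # assumed to be an up-to-date local clone
    "linux-next": "linux-next",   # likewise
}
DEFCONFIGS = ["omap2plus_defconfig", "multi_v7_defconfig", "allnoconfig"]

def build(tree_dir, defconfig):
    # Cross-compiler prefix and -j value are site-specific examples.
    make = ["make", "-C", tree_dir, "ARCH=arm", "CROSS_COMPILE=arm-linux-gnueabihf-"]
    subprocess.run(make + [defconfig], check=True)
    result = subprocess.run(make + ["-j8"], capture_output=True, text=True)
    return result.returncode == 0, result.stderr

if __name__ == "__main__":
    for name, tree_dir in TREES.items():
        for defconfig in DEFCONFIGS:
            ok, log = build(tree_dir, defconfig)
            if not ok:
                # A real job would email this; here we just print the tail.
                print("FAILED: %s / %s" % (name, defconfig))
                print("\n".join(log.splitlines()[-20:]))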

Tracking build breakage in upstream trees

● when did the build start breaking
● what are the exact build error messages (without a Jenkins click fest)
● which commit (probably) broke the build
○ automated bisect

Useful output: build testing
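For the "which commit broke the build" item above, a minimal sketch of driving an automated bisect with git bisect run; the known-good/bad revisions, tree path, and build command are placeholders.

import subprocess

KNOWN_GOOD = "v3.10"   # placeholder: last release known to build
KNOWN_BAD = "HEAD"     # placeholder: current breakage
BUILD_CMD = "make -j8 ARCH=arm omap2plus_defconfig && make -j8 ARCH=arm"

def bisect_build_failure(tree_dir):
    def git(*args):
        return subprocess.run(["git", "-C", tree_dir] + list(args),
                              capture_output=True, text=True, check=True).stdout
    git("bisect", "start", KNOWN_BAD, KNOWN_GOOD)
    try:
        # "git bisect run" repeatedly checks out a revision and runs the
        # command; a non-zero exit marks that revision as bad.
        return git("bisect", "run", "sh", "-c", BUILD_CMD)
    finally:
        git("bisect", "reset")

if __name__ == "__main__":
    # Output ends with "<sha1> is the first bad commit" on success.
    print(bisect_build_failure("linux"))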

Where is the line between Jenkins and LAVA?

● Jenkins == build, LAVA == test?

● when a LAVA test fails, how do I know...
○ was this a new/updated test?
○ was this a new/updated kernel?
○ if so, can I get to the Jenkins build?

In less than 10 clicks?

Issues: Big picture

● "Master image" is not useful○ LAVA assumes are powered on and running master

image (or will reboot into master image)○ assumptions about SD card existence, partitioning...○ assumptions about shell prompts

linaro-test [rc=0] #○ etc. etc.

● Goal: LAVA directly controls bootloader○ netboot: get kernel + DTB + initrd via TFTP○ extension via board-specific bootloader scripting

Tyler's new "bootloader" device support in LAVA appears to have mostly solved this !!

Issues: LAVA design
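To make the "LAVA drives the bootloader directly" idea above concrete, a rough sketch of scripting a U-Boot netboot over a serial console, assuming pexpect, a telnet-attached console server, and a board that has just been power-cycled; the host, port, file names, load addresses, and prompt strings are all placeholders.

import pexpect

CONSOLE = "telnet console-server 7001"   # placeholder console access
TFTP_SERVER = "192.168.1.10"             # placeholder TFTP server
COMMANDS = [
    "setenv serverip " + TFTP_SERVER,
    "dhcp",
    "tftpboot 0x80200000 zImage",
    "tftpboot 0x82000000 board.dtb",
    "tftpboot 0x84000000 rootfs.cpio.uImage",
]
BOOT = "bootz 0x80200000 0x84000000 0x82000000"

def netboot():
    console = pexpect.spawn(CONSOLE, timeout=120)
    # Interrupt autoboot; assumes the board was just power-cycled (e.g. via a
    # PDU) and that the prompt looks like "=> " (varies between U-Boot builds).
    console.expect("Hit any key to stop autoboot")
    console.sendline("")
    console.expect("=> ")
    for cmd in COMMANDS:
        console.sendline(cmd)
        console.expect("=> ")
    console.sendline(BOOT)
    # Finally wait for the kernel to reach a login prompt.
    console.expect("login:", timeout=300)

if __name__ == "__main__":
    netboot()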

● Terminology learning curve
○ dispatcher, scheduler, dashboard
○ device, device-type
○ What is a bundle?
○ WTF is a bundle stream?
○ Documentation... not helpful (enough said)

● Navigation
○ click intensive
○ how to get from a log to the test results? or...
○ from a test back to the boot log?
○ what about the build log (Jenkins)?
○ can I navigate from a Jenkins log to the LAVA test?

Issues: LAVA usability

Kernel + modules: omap2plus_defconfig
● 1 minute: hackbox.linaro.org (-j48: 12 x 3.5GHz Xeon, 24G RAM)
● 1.5 minutes: khilman local (-j24: 6 x 3.3GHz i7, 16G RAM)
● 8 minutes: MacBook Air (-j8: 2 x 1.8GHz i7, 4G RAM)
● 14 minutes: Thinkpad T61 (-j4: 2 x Core2Duo, 4G RAM)
● 16 minutes: Linaro Jenkins (-j8: EC2 node, built in tmpfs)
● 17 minutes: ARM Chromebook (-j4: 2 x 1.7GHz A15, 2G RAM)

Issues: Jenkins performance

Linaro Test and Validation Summit

Grant Likely

LCE13 - Dublin, July 2013

LEG Engineering