OSCON 2013

OSCON Trip Report
Frank Fujimoto
July 22–26, 2013
Oregon Convention Center
Portland, OR

OSCON is the O’Reilly open source convention which is celebrating its 15th anniversary. It’s quite well-attended, and follows the standard format for conventions, with two days of tutorials followed by three days of the conference. My top five sessions (with links to my extended comments on each):

  • Effective Django: This was a very effective tutorial, even though there was much more material than could be fit into the session.
  • The Open Compute Project: The most interesting part was describing how Facebook deals with photos.
  • We the People: Open Source, Open Data: It was interesting to get a peek into how the White House has been using open source for the We The People petition site.
  • Practicing Deployment: This talk covered a lot of the things we also are working through, finding the best way to do reliable deployments.
  • LogStash: Yes, Logging Can Be Awesome: This is an open source alternative to Splunk. It does require a bunch of configuration and piecing things together, but looks pretty powerful.

Summary

There was a lot of activity and engagement in the conference. One of the most popular themes in the sessions was simplifying code for maintainability and testability. The message that the organizers were trying to push was that everything which is open source needs some sort of license attached to it.

The venue worked well, although I happened to have sessions which alternated between upstairs and downstairs. Fortunately, there was ample space between time slots, provided presenters didn’t go too far over time.

Wireless was pretty good, except for times of peak usage (such as during lunch).

I found it interesting to see how people were taking notes. Several people were using tablets, and a few people actually had pads of paper. For those using laptops (about half were Macs) I saw people typing in Evernote, composing emails to themselves, and several people editing in vi.

Tutorials

The Accidental DBA

Josh Berkus, PostgreSQL Experts, Inc.

This tutorial was meant as an introduction for people who find themselves responsible for managing a PostgreSQL database. Josh gave an overview of PostgreSQL’s versions, release schedule, and overall architecture (including extensions). He then covered basic configuration, including hardware (DBs require paying attention to RAM, CPU, and I/O, and how they work together), OS and filesystem (separating the DB and the transaction log is a good idea), and PostgreSQL itself (such as max_connections).
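
To make the configuration discussion concrete, here is a sketch of the kind of postgresql.conf settings involved; the values are placeholders for illustration, not Josh’s recommendations.

    # postgresql.conf (illustrative values, not recommendations)
    max_connections = 100          # keep modest; use a pooler for more
    shared_buffers = 2GB           # a fraction of RAM, commonly around 25%
    work_mem = 16MB                # per-sort/per-hash memory, multiplies fast
    effective_cache_size = 6GB     # hint about how much the OS caches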

Connection pooling can be done with pgbouncer, but it was developed by Skype, and it’s not clear what the product’s future will be. It can be used on the DB server (for long-lasting connections) or on the application server (for short-lived connections). It can be configured to release connections after transactions (the most common setting) or statements (only useful if all queries are stateless).
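
As an illustration of the pooling modes (my sketch, with made-up values), the relevant pgbouncer.ini settings look roughly like this:

    ; pgbouncer.ini (sketch)
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_port = 6432
    ; pool_mode can be session, transaction, or statement;
    ; "statement" only makes sense if all queries are stateless
    pool_mode = transaction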

Backups were discussed, and can be done on several levels. Point in time recovery (PITR) backups were discussed quite a bit, especially since they are the basis for replication, which was another large section of the tutorial.
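
For reference, the PITR mechanics boil down to archiving the write-ahead log; a minimal sketch (the paths are made up) looks something like:

    # postgresql.conf: ship WAL segments somewhere safe for PITR
    wal_level = archive
    archive_mode = on
    archive_command = 'cp %p /mnt/archive/%f'

    # then take a base backup to restore from:
    #   pg_basebackup -D /mnt/backup/base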

Monitoring was the last broad subject, which also included configuring vacuum and checking for deadlocks.

MongoDB – From Zero to Sharded

Shaun Verch, 10gen

MongoDB is a NoSQL database which has become quite popular. The intent was to show the basics of how to use MongoDB and its core design principles. The plan was to build a simple application, then cover the concepts of replication and sharding.

MongoDB is built around BSON, a binary representation of JSON. The format is used both for filesystem storage and the communication protocol.

A document is what would be thought of as a record in many other DBs, but it can be quite complex. While it’s technically a set of key/value pairs, each value can itself be a dictionary of more key/value pairs, so a document can be much more complex than a row in SQL.
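
A minimal sketch in Python (using pymongo; the collection and field names are mine, not from the tutorial) shows how naturally nested documents fall out:

    from pymongo import MongoClient

    people = MongoClient().addressbook.people   # assumes a local mongod

    people.insert_one({
        "name": {"first": "Ada", "last": "Lovelace"},
        "emails": ["ada@example.com"],
        "address": {"city": "London", "country": "UK"},
    })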

Like many other database products, MongoDB provides a front-end shell so you can interactively talk to the DB. It’s implemented using V8, the Google Javascript engine. It’s a good fit since JSON is so fundamental to the language.

Indexes are a big part of MongoDB, but you also need to put thought into what should be indexed. You can have multiple indexes, but too many will slow down the DB since each one will need to be updated for each document inserted or changed. Indexes also determine the sort order of query results, and you can combine them into compound indexes so ties are broken predictably. You can also index by subdocuments.
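
For example (my sketch, continuing the hypothetical address book), a compound index and a subdocument index in pymongo look like:

    from pymongo import MongoClient

    people = MongoClient().addressbook.people

    # Compound index: results sorted by last name break ties on first name
    people.create_index([("name.last", 1), ("name.first", 1)])

    # Indexing a field inside a subdocument also works
    people.create_index("address.city")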

The whole second half of the tutorial was spent on the various forms of replication. You can have replica sets, which means a master and at least one slave, with automatic failover. You can also tag the individual instances and have queries directed to only those instances with a certain tag.

You can also enable sharding, which means splitting the data over multiple MongoDB instances (or even multiple replica sets). This also requires a configuration server (three of them for production installations), then the shard servers can be configured. The shard keys need to be indexed, and should be chosen to have good read and write distribution.
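
The commands themselves are standard MongoDB admin commands; a sketch (the host and names are made up) issued against the mongos router:

    from pymongo import MongoClient

    admin = MongoClient("mongodb://mongos.example.com:27017").admin

    admin.command("enableSharding", "addressbook")
    admin.command("shardCollection", "addressbook.people",
                  key={"user_id": 1})   # the shard key must be indexed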

Web Accessibility for the 21st Century

Deborah Kaplan and Denise Paloucci, Dreamwidth Studios

Previously, a lot of work on accessibility has been geared towards making things easier for assistive technology such as screen readers, but those comprise about 3–5% of web usage. However, accessibility is a much broader subject, since it can also include making things work with different font sizes; up to half of web users change the font size of their browser to make things easier to read.

Designing accessibility is more of an art than a science, since many times there will be tension between making things accessible to multiple audiences. For example, sighted readers can glance at a page and have an idea of its structure, but it helps other users to have links to go to the main content (screen readers are getting better at recognizing these, but it still helps people who enlarge the font size).

There has been a big push behind semantic markup in recent years, and while that’s a good thing to do, it shouldn’t be done just to make a page more accessible for screen readers. Newer screen readers can handle both semantic and presentational tags, although it still helps to correctly mark headers and sections.

ARIA tags, which give tags a role (menu, etc.), are helpful for screen readers, but they’re even more useful when there is dynamic content. For example, if you have a box which pops up which is an alert, you can set the role to an alert and screen readers know to interrupt the normal flow of reading to let the user know it’s there.
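
A minimal sketch of the idea (my markup, not from the tutorial):

    <!-- role="alert" marks this as an assertive live region, so screen
         readers interrupt to announce it when it appears dynamically -->
    <div role="alert">
      Your session is about to expire.
    </div>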

Much of the other information covered in the tutorial consists of things which have been best practices for a while, so it’s good to know that while things are still progressing, previous knowledge is still useful.

Effective Django

Nathan Yergler, Eventbrite

This was a whirlwind introduction into deploying and using Django. I say whirlwind because Nathan could speak and type at the same time, and did both quickly. Those who couldn’t touch type with any speed ran into problems following along with the exercises.

The whole session was geared towards creating an address book web application. Nathan pointed things out as he came to them, such as how to make sure you know what library versions you’re pulling in when creating a new installation.

The first code covered creating the project’s models and how they tie into applications (a project is made of multiple applications). Once the models are built and hooked into the main application, unit tests should be written to test the model. Django has a testing framework which will create a fresh, empty DB to test against.
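
A minimal sketch of that pattern (the model and names are mine, not the tutorial’s):

    # models.py in the app
    from django.db import models

    class Contact(models.Model):
        first_name = models.CharField(max_length=255)
        last_name = models.CharField(max_length=255)

        def full_name(self):
            return "%s %s" % (self.first_name, self.last_name)

    # tests.py: runs against the fresh, empty test DB Django creates
    from django.test import TestCase

    class ContactTests(TestCase):
        def test_full_name(self):
            c = Contact(first_name="Ada", last_name="Lovelace")
            self.assertEqual(c.full_name(), "Ada Lovelace")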

Writing the views was next, and they can be function-based or class-based. URLs are mapped to the views, and since the patterns are regular expressions, they can be very complex. Templates for the views can be put directly into the apps, which keeps them tightly bound. However, the views can refer to URL names rather than hard-coded URLs, and Django converts the names to URLs based on the mappings, which makes the apps more portable.
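
A sketch of the mapping in the Django of that era (the app and view names are hypothetical):

    # urls.py: regex patterns mapped to views, each given a name so
    # templates and code never hard-code URLs
    from django.conf.urls import url
    from contacts.views import ContactDetail

    urlpatterns = [
        url(r'^contacts/(?P<pk>\d+)/$', ContactDetail.as_view(),
            name='contact-detail'),
    ]

    # In code: reverse('contact-detail', args=[42]) -> '/contacts/42/'
    # In a template: {% url 'contact-detail' contact.pk %}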

The views can be tested by using the request factory, but he had us use Selenium instead, which will fire up a browser and test the results.
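
A bare-bones version of that kind of check (a standalone sketch; the tutorial wired it into Django’s test runner):

    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8000/contacts/")
        assert "Contacts" in driver.title   # hypothetical page title
    finally:
        driver.quit()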

We also covered making the templates generic so they can be used in more than one place, as well as adding custom fields to forms. Time started to run short, so he skipped a couple sections, but the whole tutorial is available online.

Plenary sessions

On Open Intelligence

Jeff Hawkins, Numenta, Inc.

Jeff spoke about the Numenta Platform for Intelligent Computing (NuPIC) which tries to mimic how the neocortex takes inputs, recognizes patterns, and creates sequence memory. The project is open source, and the community is working towards machine intelligence based on cortical principles.

The Open Compute Project

Jay Parikh, Facebook

Jay gave an overview of the Open Compute Project, which was started by Facebook but which now has lots of contributors. He talked about how the amount of existing data has gone from 0.8 ZB (zettabytes) in 2010 to 2.8 ZB in 2012, and is expected to hit 40 ZB by 2020, if not sooner. The Open Compute Project has helped them change their model to provide more datacenter resources. The last pieces are networking hardware and software, and they’re working with companies to create vendor agnostic software.

The second half of Jay’s talk was about how Facebook handles photos. The numbers are huge: 250 billion photos, 350 million added per day, and storage growing by 7 petabytes per month. They’ve looked at demand (82% of the traffic is for 8% of the photos, mostly concentrated on the newest content) and created “cold storage” data centers, with about 1 exabyte per room, which have no redundant electrical systems. These data centers cost 20% of a traditional one and the servers cost 35% of conventional storage servers. Presumably redundancy comes by putting content across multiple data centers.

Creating Communities of Inclusion

Mark Hinkle, Citrix

Mark covered many different kinds of inclusion. Some of his examples showed how different open source projects have different governance models, different kinds of participation, etc. Some showed that the larger community is important; some consider any user problems to be either code or documentation problems.

His main message, however, was that open shouldn’t end at software. He related the story of Jonathan Kuniholm, who came back from war having to be fitted with a prosthetic whose design was patented in 1912. Thinking the experience could be better, he started the Open Prosthetics Project.

The Joy of Flying Robots with Clojure

Carin Meier, Neo

Carin started off by describing how she became interested in robots; when she got her first Roomba, she quickly learned she could program it. Not too long ago she acquired a quad-rotor copter and not only was able to program it, but was able to have it take inputs and act autonomously. She showed a demo of how she was able to have the two devices autonomously interact.

Open Source: The Secret Ingredient

Todd Greene, Media Temple

Todd’s presentation was short, but he encouraged people to find and work with extraordinary people (teammates, vendors, and customers), design with simplicity, be curious and experiment, and enjoy the journey.

inBloom Vision and Impact on Education

Sharren Bates, inBloom

Sharren also gave a short presentation, talking about improving personalized learning in the K-12 space.

Redefining What’s Possible on Mobile and Cloud

Mark Shuttleworth, Canonical Ltd.

In the proprietary world, the tools are like trains on rails, and the way to use them with the least friction is to not stray from their workflows. With open source, many tools are combined by people to make a product.

Mark held up a physical mockup of the new Ubuntu Edge phone.

Mark also talked about Juju, which looks like a large whiteboard where you can define and deploy cloud instances and services. A front end for the Mac is now available.

Diversity in the Innovation Economy: Why It Matters And What You Can Do About It

Laura Weidman Powers, CODE2040

Laura started out saying that at first she wondered why she was even asked to speak, since she isn’t a coder, although she is using Codecademy to learn Python during lunchtime. Her message was about race, and how African Americans and Latinos are not engaged in the tech sector at high rates. Even those who get degrees have problems getting jobs, since the degree-to-jobs pipeline is broken. Even more rare is minority placement in leadership.

Her foundation CODE2040 works to place top minority software developer students in internships. She says no one told the students that getting good grades isn’t enough; they also need things on the side to help them stand out, such as contributions to open source projects.

Code Is Making Government More Effective

Jared Smith, Bluehost; Michael Migurski, Code For America; Tim O’Reilly, O’Reilly Media, Inc.

Jared, Michael, and Tim talked about Code for America, a non-profit which uses open source to help local governments. The idea is to get people more engaged on a civic level. Fellows are involved in a year-long program, where they start at a boot camp to learn about Code for America, then they co-locate with partner cities for a month. The rest of the time is spent in the San Francisco office to develop.

Government is one of the hardest areas to cause disruption. It’s so big it can’t be efficient, but at the same time it doesn’t have the resources to do what we want it to do. The program feels it’s had success in cities such as Oakland, where the software helps citizens to find information from the government using their language rather than governmental language.

Start with Freedom

Tom Preston-Werner, GitHub

Tom said he likes to work from first principles, a fundamental truth. The one he spoke to us about is freedom, which is a first principle for GitHub. He thinks it means business minimalism, where you only add process when absolutely necessary, when its lack causes pain points. GitHub doesn’t have traditional managers, and the structure is dictated by the communication channels between people.

He had a quote <http://michaelochurch.wordpress.com/2012/09/03/tech-companies-open-allocation-is-your-only-real-option/> from Michael O. Church: "When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers."

Tom’s main message, however, was about licensing. Public domain code isn’t an option, since there isn’t a lot of legal precedent for completely removing copyright. There are a lot of licenses which aren’t appropriate since they have no limitation of liability clause.

The "kit" licenses (MIT, BSD, GPL, Apache, etc.) are good because they’re very clear in how they allow copying and modifying, while having a limited liability clause.

Licensing Models and Building an Open Source Community

Eileen Evans, HP

Eileen continued with the licensing message. An attorney by trade, she’s been tracking open source licensing models for quite a while. For the open source community to maintain its vibrancy, there needs to be technology, governance, and licensing. She divided the open source licenses into two broad categories, copyleft (such as the GPL, where you must contribute work back to the community) and permissive (MIT, BSD, Apache, etc., where you can essentially do what you want).

She used to believe that copyleft licenses were needed to enforce participation in the open source community, but over time has seen the trend move towards permissive licenses.

We the People: Open Source, Open Data

Leigh Heyman, Executive Office of the President

Leigh talked about the We the People site which allows people to sign and create petitions which are seen directly by the White House. The most notable one was the Death Star Petition, and the response was just as tongue-in-cheek as the original petition. Leigh saw this as a good way to engage with the community.

The site launched in 2011 and has nearly 10 million users and 15 million signatures. The site is built with Drupal modules, and they have been shared on GitHub.

Leigh showed a clip in which President Obama asks, "What’s the next big thing?" and is told, "APIs for whitehouse.gov." His response: "APIs for whitehouse.gov? What the heck does that mean?" A read API exists for the petition site, and a write API (so people can put their own front end on petitions) will be released soon.

They’ve held hack-a-thons at the White House, but had to explain to the Secret Service that "hack" in this sense was a good thing.

Turing’s Curse

John Graham-Cumming, CloudFlare

John ran through a brief history of computing highlights, showing that there are many things we think of as new which are really reincarnations of older technology, such as:

  • Big data, where a 1955 project worked out the distance between railway stations in the UK. There were 12 million points of data and 2K of memory.
  • Virtual machine hypervisor, which is what IBM used in 1967 for its VM operating system.
  • Event-driven programming, which was used in PL/I in 1966.

Since, for the most part, what you’re trying to do has probably been done before, there’s a lot of value in a computer science education.

The one thing we have yet to conquer, however, is unreliability, and that’s an area where effort should be spent, to help programmers not only make fewer mistakes, but also find existing mistakes.

He cited Donald Knuth who, when asked, "Which programming language do you prefer, Java or C++?", replied, "Which has the better debugger?"

Distilling Distinction

Robert "r0ml" Lefkowitz, Sharewave

Robert spoke about recognition and the importance of receiving it. There are a few ways that people in open source can get recognized, but those honors are only bestowed on a few people. Creating one’s own honors is problematic because they wouldn’t be recognized by the industry.

The best way he sees to recognize open source contributors is to nominate those people for honors either with the ACM or the IEEE. However, people need to be members to either vote or be recognized, so his push was to get people to join one or both of the societies and nominate notable open source contributors.

Sessions

Solving Embarrassingly Obvious Problems in Code

Garrett Smith, CloudBees

To make it easier to support code, you need to look at the parts which are not obvious and make them obvious. Doing so also has the side effects of introducing a separation of concerns (not too many things in the same statement), limiting the surface area for bugs, and improving testability. It makes it easier not only for you to go back and maintain the code, but for others to do so, too.
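
A toy illustration of the idea (mine, not Garrett’s example): the two versions below do the same work, but the second makes each concern obvious and separately testable.

    import json

    # Before: parsing, filtering, and storage tangled into one statement
    def handle_opaque(body, store):
        store.append({k: v for k, v in json.loads(body).items() if k != "id"})

    # After: small pieces with simple purposes, composed together
    def parse(body):
        return json.loads(body)

    def strip_id(record):
        return {k: v for k, v in record.items() if k != "id"}

    def handle(body, store):
        store.append(strip_id(parse(body)))

    records = []
    handle('{"id": 1, "name": "Ada"}', records)
    print(records)   # [{'name': 'Ada'}]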

Practicing Deployment

Laura Thomson, Mozilla Corporation

This session was very popular, with people standing all around the room. Laura talked about the different models of deployment: initial (startup phase, developers push code, not a lot of tests), repeatable (often after first non-beta shipment, some documentation), defined (start of automation, deployment often done by a sysadmin), managed (automated, verification done post-push, things are measured), and optimizing (continuous deployment, a lot of test automation, deployments are lightweight).

There are different levels of velocity of updates: critical mass (when there’s enough to push), single hard deadline (by date, usually shipping to a marketing plan), the train model (release at fixed intervals, ship whatever’s ready at that time), and continuous deployment (ship each change as soon as it’s done).

She had a lot of comments about tools and practices. Development environments are tricky because a laptop is not the best environment, VMs can be hard to maintain, and DBs can be hard (you need fake data or a small version). She advocates "try" servers for branches, a shared environment where branches can be installed. Staging servers must reflect production, have the same versions, be in the same proportions (number of DBs to web front ends, etc.), and must be monitored.

It doesn’t matter what deployment tools you use, but they should be automated. Testing should be multi-faceted, with unit tests, acceptance tests, load tests (finding the maximum load that can be supported with a particular build), and smoke tests.

It’s more important to reduce the mean time to repair, rather than mean time between failures.

Laura’s final takeaways were that you should build the capability for continuous deployment even if you don’t intend to do it, and the only way to get good at deployment is to do it a lot.

More Code, More Problems

Edward Finkler, FictiveKin

Edward thinks that developers should concentrate on learning languages rather than learning frameworks. That way you don’t limit your options for solving problems.

You should write code by starting with small things with simple purposes, then those blocks can be put together. Less code is better than more code, since there’s less to manage and support. This also applies to code you depend on (such as libraries).

Discrete Math You Need to Know

Tim Berglund, GitHub, Inc.

This was a pretty fun talk, at least for me, since I took a discrete math class a few decades ago. Tim started with different ways of counting (ordered or not, repeating or not) as a way of introducing permutations and combinations. He then introduced number theory, centered around finding greatest common divisors. After that came modular arithmetic, which is at the center of a lot of discrete math, especially when dealing with large numbers.

With that, Tim had introduced enough of a foundation to outline how public/private key pairs work. Things got pretty abstract pretty quickly, and I’m guessing many people zoned out at that point (especially judging by the average rating for the session), but going from zero to RSA in 40 minutes was pretty impressive.
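
For the curious, here’s a toy version of that zero-to-RSA arc (my sketch; the primes are tiny, where real keys use primes hundreds of digits long):

    p, q = 61, 53
    n = p * q                   # public modulus
    phi = (p - 1) * (q - 1)     # Euler's totient of n
    e = 17                      # public exponent, coprime to phi
    d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

    message = 42
    cipher = pow(message, e, n)           # encrypt: m^e mod n
    assert pow(cipher, d, n) == message   # decrypt: c^d mod n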

while (true) do; how hard can it be to keep running

Caskey Dickson, Google

This was another popular session. Being from Google, Caskey concentrated on very large scaling. He started by saying that if you have a non-trivial number of servers, keeping a given daemon running at all times on each of them is difficult. For 300 servers, keeping that daemon running for six months is like keeping one daemon running on one server for 150 years (300 servers × 6 months is 1,800 server-months, or 150 server-years).

Getting the right process running is pretty well understood, upgrading and rolling back that process to new/old versions are doable, but downgrades are hard. Of course, the easiest solution is to do everything manually, but that doesn’t scale.

He went through various iterations of periodic cron scripts and init scripts. He asserts that saving the process ID in a file is a non-starter, especially since by the time you read the contents what’s actually running might have changed.

His suggestion is to run upstart (like init), and do all configuration in the upstart script (which should be non-destructive, of course). Your installation scripts should not assume they’re an upgrade, but should handle that case (killing the existing daemon). Also, the daemons should not do fork/execs so they can be the direct children of upstart.
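
An illustrative upstart job along those lines (the names are made up; the stanzas are standard upstart):

    # /etc/init/mydaemon.conf
    description "example daemon supervised by upstart"

    start on runlevel [2345]
    stop on runlevel [016]

    # restart the process if it dies
    respawn

    # The daemon must stay in the foreground (no fork) so upstart remains
    # its direct parent and can track it without a pidfile.
    exec /usr/local/bin/mydaemon --foreground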

As a last note, Caskey recommends doing health checks to make sure the processes are doing the right thing, not just running.

Planned Obsolescence: Built to Last, or Build one to Throw Away?

Ian Dees, Tektronix, and Baq Haidri, LinkedIn

Ian and Baq opened by comparing temporary physical constructions with temporary code by quoting James Gleick, who said that computer programs "…are machines with far more moving parts than any engine: the parts don’t wear out, but they interact and rub up against one another in ways the programmers themselves cannot predict."

They followed by saying that when code is written, the tendency is to write it to last for a long time and be used in many different situations. With that, they outlined their five stages of an engineer.

As engineers get more experience, they learn they don’t need to make things complex (indeed they’re not judged by how complex they can make things) and learn the right level of abstraction which is needed to make things maintainable. They find that simple, boring code is OK, and interfaces between code blocks can make things more testable and convenient.

They closed with Derick Bailey’s rule of three: "If you need something once, build it. If you need something twice, pay attention. If you need it a third time, abstract it." One of the reasons to wait until three is that two data points can always be fit by a line, but three points can show a pattern.

Database Performance: What Really Matters?

Peter Zaitsev, Percona Inc.

The big challenge for increasing database performance is that in the end, users care about application performance, so tuning a database has to be taken as part of the whole application. The key metric is response time for transactions.

Things that can impact performance are changes to transaction volume (more users), changes to the mix of transaction types, data volume, changes in application features, and environment changes (network, hardware).

He likened optimizing application request processing to manufacturing, where you need to identify and protect the point which is the bottleneck, and you need to minimize the amount of work which is in progress at any one time. He also made analogies to call centers, where queueing theory says you need to care about response time (wait time plus service time), not just service time.
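
A back-of-the-envelope illustration of why (my numbers, using the textbook M/M/1 queue): response time is fine at moderate utilization and blows up as the bottleneck approaches saturation.

    service_time = 0.05                    # seconds per request
    for arrival_rate in (5, 10, 15, 19):   # requests per second
        utilization = arrival_rate * service_time
        response = service_time / (1 - utilization)  # M/M/1 mean response time
        print(f"{utilization:.0%} busy -> {response * 1000:.0f} ms")
    # 25% busy -> 67 ms ... 95% busy -> 1000 ms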

For load testing, it’s best if you can capture a series of transactions and replay them. If you’re using a test system, be sure to add side loads, too (things on the production box other than the DB). While this testing will catch persistent problems, transient problems will still be hard to catch.

LogStash: Yes, Logging Can Be Awesome

James Turnbull, Puppet Labs

LogStash is similar to Splunk, in that it’s a system to gather and aggregate log files. It knows several different time formats (and can be taught more) and knows how to consolidate multi-line log entries.

The application is highly geared towards scaling and uses Elasticsearch as its storage back end. Data can be extracted to other tools such as Graphite.
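
The configuration is a small DSL of inputs, filters, and outputs; a sketch (the plugin names are real for the LogStash of that era, but the values are made up):

    input {
      file { path => "/var/log/app/*.log" }
    }
    filter {
      grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
    }
    output {
      elasticsearch { }
      graphite { host => "graphite.example.com" }
    }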

The LogStash maintainers feel strongly that any user problems are bugs, either in implementation or documentation.

Interactive debugging in PHP: stealing the good bits from Ruby and Javascript

Justin Hileman, Presentate

Justin started off saying that PHP’s heritage can be linked to many sources, such as packaging from Ruby and APIs from both Perl and C. However, most people debug by essentially dumping the values of all variables at various parts of the script to see what the state was at that point.

Other languages have a couple of tools available: an interactive command-line interpreter and a step debugger. He demonstrated the latter using Xdebug with Codebug as a front end, and the former with PsySH.

Adventures in Node.js

Faisal Abid, Dynamatik, Inc.

Faisal seemed to be more of a fan and user of Node rather than a developer of it, but he’s done a lot of work with it.

His demo allowed the audience to control a small monkey sprite on the display. The application was, of course, written in Node. However, it got to the point where there were so many monkeys that you couldn’t read the slides, so he switched to the static presentation.

He ran through some demonstrations, and concluded that good reasons to use Node are that it lets you easily build applications using existing skills, there’s huge community support, and it’s a fast-growing platform.

Tuning TCP for the Web

Jason Cook, Fastly

Since Jason works for a CDN, he’s very familiar with the need for high-performance networking, and has gained a lot of experience tuning TCP. They optimize for small requests such as HTML, Javascript, and CSS, since that’s the bulk of the requests they see.

Some of the things to tune are the backlog (number of connections which can be waiting for the application to accept them), SYN cookies (should be enabled, even though it disables large windows), and TIME_WAIT (timeout value should be lowered from 120 seconds to about 10 and the number of buckets should be increased).

He discussed TCP slow start, and how things have changed in the latest Linux kernels. You should increase the initial congestion window so the entire SSL certificate can fit in the initial flight of packets.
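
On Linux these knobs look roughly like the following (my sketch; exact parameter names and semantics vary by kernel version, so verify before copying):

    # enable SYN cookies, raise the accept backlog, allow more TIME_WAIT buckets
    sysctl -w net.ipv4.tcp_syncookies=1
    sysctl -w net.core.somaxconn=1024
    sysctl -w net.ipv4.tcp_max_tw_buckets=2000000

    # raise the initial congestion window on the default route
    ip route change default via 192.168.1.1 dev eth0 initcwnd 10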

HTTP keepalives should be enabled if at all possible, especially for SSL connections, since the initial handshake is the most computationally intensive part. It can have quite an impact over slow connections.

His takeaways were that you should upgrade your kernel so you can increase the initial congestion window, check the backlog and TIME_WAIT limits, resize buffers to something reasonable, and if possible, get closer (network-wise) to your users.

Offline strategies for HTML5 web applications

Stephan Hochdörfer, bitExpert AG

The landscape for HTML offline strategies is changing. Before, people used to stash as much into cookies as they could, but those are limited to 4k of data, and are always sent back to the server, even when they’re not necessary. Internet Explorer introduced DHTML behaviors, where users could store data, but they never had cross-browser support. Flash cookies became available, but Flash is not available on portable devices. Google introduced Gears, but it was a plug-in and is no longer supported.

There’s an HTML application cache, where you can store objects in the browser itself. You can define which objects will be saved (the default is everything), but you have to be aware of how aggressive the cache can be, especially if the user never clicks refresh.
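
The cache is driven by a manifest file referenced from the page (e.g. <html manifest="app.appcache">); a minimal sketch:

    CACHE MANIFEST
    # v1 - bump this comment to force the browser to refetch everything
    index.html
    app.js
    style.css

    NETWORK:
    *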

Web storage offers local and session storage through Javascript objects. The store consists of key/value pairs whose values are unstructured, and it has no transaction support. You also can’t set data to automatically expire, and there is inadequate information about how much space you can use.

The web SQL database offers SQLite through the browser, but can be pretty slow and has been deprecated (as of 2010).

IndexedDB offers a compromise between web storage and the web SQL database. It also stores Javascript objects.

There’s also a file API for saving things directly on the filesystem.

Cryptography Pitfalls

John Downey, Braintree

John started with a brief history of cryptography, but right off the bat warned that cryptography itself is very strong, but how it’s tied into systems is usually where things go wrong. He likened it to installing a bank vault door on the entrance of a tent.

His overall recommendations are to use SSL, SSH, VPNs, or IPSec for data in transit, use GnuPG for data on disk, and to be sure to use an existing high-level library, since it’s very difficult to get things right.

He then went over some pitfalls of various ciphers, such as insufficient random number generation, length extension attacks, electronic codebook (ECB) mode, password storage (you can delegate auth via OAuth, etc., but if not, use one-way hashes), SSL cert verification, and trust (SSH keys, SSL CAs, etc.)
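
One common remedy for the password-storage pitfall, sketched in Python (my illustration, not John’s code): a salted, deliberately slow one-way hash via a high-level standard-library call.

    import hashlib, hmac, os

    def hash_password(password, iterations=100_000):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                     iterations)
        return salt, iterations, digest   # store all three

    def verify(password, salt, iterations, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                        iterations)
        return hmac.compare_digest(candidate, digest)   # constant-time compare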

A Short History of Random Numbers, and Why You Need to Care

Matthew Garrett, Nebula

As the talk title says, Matthew went over a history of random numbers. He described both random and pseudo-random numbers, explaining that sometimes scientists like pseudo-random numbers because their sequences can be recreated. However, you need to be sure to use truly random input to seed the generator for real-world use. This is becoming more critical with virtual servers, since it’s theoretically possible to discover the initial state of another guest by observation.
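
In Python terms (my illustration): the seedable generator is reproducible, which is exactly why it must not be used for security.

    import random, secrets

    random.seed(42)
    print(random.random())   # pseudo-random: same value on every run

    # security-sensitive values should come from OS entropy instead
    print(secrets.token_hex(16))   # different on every run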

Get Off Your Butt – Tips and Techniques For Those Who Sit A Lot

Jason Levitt, Spirit.io; Clayton Aynesworth, Alternative Healing of Austin

As the talk title suggests, this was a discussion centering around health and sedentary lifestyles. One study showed a couple of interesting results: most adults need at least 30 minutes of at least moderate physical activity at least five days a week, and sitting too much reduces your body’s ability to burn calories. As for the latter finding, restricted physical activity has been reported to result in a ten-fold decrease in lipoprotein lipase activity (breaking down fat for energy instead of storing it) in muscle fibers.

For now, the best advice is to be more active and sit less. They discussed standing desks, and other strategies, such as walking meetings.

They also discussed ergonomics, especially keeping your spine aligned, which reduces the pressure on your upper spine (for every inch your head sits forward of the optimal position, it puts about ten pounds of extra load on the spine). Your body tries to adjust, and at some point your muscles become used to being in those positions. Clayton demonstrated some devices which help your body stay in better positions.