


Saturday, September 1, 2018

From the Pages of Outdoor News – Sept. 1, 2018 https://ift.tt/2LMwyJR

Skill At Arms Experience Days - Safe Rifle Shooting Events www.skillatarms.co.uk



from Outdoors https://ift.tt/2NeFMU5

What Pros Wear (and Eat) During UTMB

Boulder Denim Launches a New Line of Performance Jeans

Boulder Denim was one of the first manufacturers to incorporate stretchy Lycra into jeans when it launched in 2015. Since then, performance jeans—jeans that are comfortable and look good but don’t hinder you when climbing, running, or biking—have exploded, with major brands like Patagonia, Levi’s, and Black Diamond all making their own versions.

This month, Boulder Denim is launching Boulder Denim 2.0, a new product designed with the same performance uses in mind but made from a more comfortable fabric and with more everyday-wear features. The new jeans are 83 percent cotton, 14 percent polyester, and 3 percent Lycra in a soft twill blend that makes them even harder to rip or tear yet lightweight and more breathable than other denims on the market. The fabric moves with the body, and each pair is shaped with a waistband designed to work with the wearer’s curves, hugging contours and reducing gapping. The curse of regular jeans is that they stretch and loosen over time, but with 90 percent shape memory and reinforced stitching, Boulder Denim 2.0 retains its fit no matter the activity. The PFC-free hydrophobic DWR treatment repels sweat, spills, and stains to help keep the pants dry and clean longer.

My favorite feature of the Boulder Denim 2.0s is the super-deep pockets (they’re easily the deepest jean pockets I’ve ever seen), which are designed to keep items from falling out when you’re on the move. Additionally, there’s a small front pocket with a hidden zipper to secure valuables.

Currently available for preorder on Kickstarter, the jeans come in four different models: men’s slim, men’s jogger, women’s skinny, and women’s straight. 

Buy Now



from Outside Magazine: All https://ift.tt/2MESfAF

A University Professor Makes First Ascent

Outdoor Brands Speak Out Against Latest China Tariffs

Outdoor gear is about to get more expensive if a proposed tariff on $200 billion in Chinese imports receives U.S. Trade Representative approval. The items on the 195-page list—including ski, bike, and camping gear shipped in from China—would be subject to a tax of anywhere from 10 to 25 percent.

The proposed tariff on these products is collateral damage from a mounting trade dispute with China over alleged intellectual property theft. In May, the USTR approved a 25 percent tariff on $34 billion in Chinese imports, followed by a 25 percent tariff on $16 billion in imports in August. (China has issued its own retaliatory tariffs in response.)

While outdoor companies were largely spared in the first round, the second round, which went into effect Thursday, August 23, hit e-bikes. The latest list, which is currently undergoing public comment before final approval, brings even more outdoor gear under threat: ski gloves, knit hats, helmets, backpacks, candle lanterns, knives, camp chairs, raw wool, bikes, and a long list of bike components including brakes, saddles, forks, frames, and pedals. “Almost every part you need to work on a bike,” says Alex Logemann of the advocacy nonprofit PeopleForBikes.

Last month, Outside reported on the impact such tariffs could have on the bike industry, which imported 99 percent of the 17.8 million bikes sold in the U.S. in 2014, according to a report by the National Bicycle Dealers Association. PeopleForBikes estimates that 94 percent of complete bikes sold in the U.S. come from China. Ski and camping companies have skin in the game as well, since, according to Snowsports Industries America (SIA), knit hats, gloves, helmets, and sports bags (duffels, backpacks, boot bags) accounted for $779 million in sales between August and March 2018.

Representatives from the Outdoor Industry Association, SIA, and PeopleForBikes cited these statistics and more in a public hearing last week, alongside executives from several major outdoor brands, including Specialized, Advanced Sports Enterprises (the company behind Fuji bikes), Bell Sports, SOG (on behalf of a coalition of seven knife companies), and Fitbit. “Raising the tariff to 25 percent could very well put some small, medium-sized companies out of business,” Rich Harper, manager of international trade for the Outdoor Industry Association, said in his statement. “Ultimately, this means outdoor companies will be unable to create new U.S. jobs and, in some cases, may be forced to eliminate existing jobs. It will force some companies to discontinue popular and profitable products and cease the development of new products.” 

PeopleForBikes anticipates retail prices for bikes and bike accessories will go up by at least 25 percent. “For someone buying a $1,000 bike, what they’re able to buy today and what they will be able to buy with the tariffs imposed is going to be different,” says Bob Margevicius, executive vice president of Specialized. “You’re not going to get the same performance or quality for $1,000 anymore.” Rad Power Bikes, a leading manufacturer of e-bikes, raised its prices by several hundred dollars the same day the tariffs went into effect. 

“Either you raise prices and lose business, or you eat the margin and risk running out of cash,” says Brent Merriam, COO of NEMO Equipment. NEMO, SOG, and Industrial Revolution, the company behind UCO camping gear, are all preparing to raise prices on affected products—the Stargaze Recliner chair for NEMO, titanium flatware and candle lanterns for UCO, and a significant portion of SOG’s line of knives—if the tariffs go into effect. Outside reached out to a handful of backpack makers, but none would comment.

For retailers, particularly the ones in winter resort towns that make the bulk of their money during the four-month ski season, price increases could be devastating. “With even a slight increase in price, the sustainability of our industry is in jeopardy,” says Nick Sargent, CEO of SIA, “because a price increase to consumers is fully expected to drive a decline in spending, which will ripple across local communities and tourist-dependent resort towns throughout the United States.” A loophole in customs regulations could exacerbate the problem. The de minimis rule allows imports under $800 bought by an individual to cross through customs duty-free. In practice, this means cheap gear purchased direct-to-consumer from China via sites like Amazon or Alibaba will escape the 25 percent tariff. Effectively, this could drive customers away from pricier U.S.-made gear, American companies that manufacture their products in China, and already struggling brick-and-mortar stores.

Asked whether the tariffs would be incentive enough to bring manufacturing back to the U.S., most companies say no. “We don’t have the capacity, the technology, or the specialized skill sets to produce in mass volume here,” says Jonathan Wegner of SOG. The tariff on bike components means that the few factories that do make bikes on U.S. soil will actually face higher costs. Scaling up with a new manufacturing facility, domestic or foreign, can take years—Merriam says 2021 is the soonest NEMO would be able to shift over to a new factory—and often requires paying big bucks for new tooling (the specialized equipment used to manufacture technical gear), which Merriam says would cost around $75,000, since brands usually can’t take the machines with them when they change suppliers.

A group of outdoor-industry executives is planning to go to D.C. to plead its case before congressional representatives later this month. Consumers can voice their opinions by submitting written comments through September 5.



from Outside Magazine: All https://ift.tt/2wyIoSd

Nepal's Guides Are Making Big Money on Insurance Scams

In Nepal, there’s a new scam directed at trekkers in the Mount Everest region, and to see how it works you need look no further than the experience of Jessica Reeves.

The Australian told Agence France-Presse that she was trekking with Himalayan Social Journey when she complained to her guide about a common cold. It wasn't an emergency, and certainly not life threatening. But her guide repeatedly urged her to agree to a helicopter rescue.

“They said if I kept going it would be really risky, so it was better to leave now instead of risking it,” she said.

According to Reeves, nine or ten hikers in her group shared a helicopter ride back to a hospital in Kathmandu, but were each told to say they were alone. She thinks that Himalayan Social Journey billed each of the client’s insurance providers for a separate helicopter ride, banking about $35,000 in the process. Another trekker told GearJunkie earlier this month that her partner complained of a mild headache and their guide suggested a helicopter rescue right away, saying they should both take the ride and tell whoever asked that they were feeling very sick. A local helicopter pilot, who rescued trekkers almost daily during the April and May trekking season, told AFP that during that time he flew only three people who actually seemed to be ill. 

As the scam goes, once off the mountain the climbers are taken to hospitals, where they undergo a battery of tests, all billed to their insurance. From mountain to hospital and back, the guides, helicopter companies, and hospitals all take a cut from these false insurance claims. According to AFP and Traveller Assist, a UK-based company that represents international insurers, the high number of helicopter rescues for tourists made 2017 the most expensive year yet in Nepal for insurance companies (though 2018 is on track to outdo it).

Outrage over this widespread scheme prompted a major government crackdown this summer. And last month an investigative committee submitted a 700-page report to Nepalese Tourism Minister Rabindra Adhikari. The report found that 1,300 helicopter rescues took place in the first five months of 2016 and cost insurers more than $6.5 million. One of the more concerning findings detailed how some guides served food tainted with baking soda, a known laxative, in order to sicken tourists so they could be pressured into a helicopter rescue. In all, according to the Kathmandu Post, the investigation probed ten helicopter companies, six hospitals, and 36 travel, trekking, and rescue agencies—with further investigation of 15 of these companies recommended. The scamming has become so pervasive that the report advised that all rescue operations be taken over by Nepal’s police.

The stakes for solving the problem are high. Insurance companies set a September 1 deadline for Nepal to crack down on the abuse, threatening to stop providing coverage for trekkers and climbers if nothing is done. That would have huge ramifications on the country and the people who depend on this work, because tourism is one of Nepal’s main industries.

The country had already taken a major financial hit from the magnitude-7.8 earthquake that struck in 2015, killing nearly 9,000 people and destroying homes and buildings. Tourism has been slow to recover ever since. Meanwhile, more than 2,600 trekking agencies are competing for this now smaller pool of tourists, so operators lower their rates, which leaves them with little margin.

“We are moving on a price war rather than a service war,” Deepak Joshi, CEO of the Nepal Tourism Board, told GearJunkie. “And that is causing desperate measures.”



from Outside Magazine: All https://ift.tt/2ouvUYc

Customizing Puppeteer Tests - Part 3

In our previous two posts, we talked about why we switched to Puppeteer and how to get started running tests. Today, we are going to work on customizing tests by passing in custom parameters.

Reasons for Custom Parameters

We need to be able to pass in custom parameters for debugging and local testing. Our tests currently run through Travis CI, but if a developer needs to run the tests locally, the options are not exactly the same.

  • The URL for the test will be different

  • The developer usually needs to debug the tests to determine why they failed

We implemented three custom parameters to help with this problem:

  1. Ability to pass in a custom URL

  2. Ability to run Chrome in a non-headless state

  3. Ability to have screenshots taken of failing tests

We are going to go through all of these custom parameters and learn how to implement them.
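
Once all three parameters exist, they can be combined in a single run. Assuming the flags described in the sections below, a full local debugging command would look something like this:

npm test -- --url=http://outside.test --head --screenshot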

Pass in a Custom URL

At Outside, we run our tests on a development Tugboat environment and on our local machines. The base URLs for these environments differ, but the paths to specific pages do not. For example, our local machines point to http://outside.test, while our Tugboat environments are unique for each build.

We are going to pass a parameter that looks like this: --url={URL}. For our local site, the full command ends up being npm test -- --url=http://outside.test.

Let's get started in setting this up.

  1. We need a variable containing the base URL that will be accessible across all test files. In bootstrap.js, inside the before function, we are going to name the variable baseURL:


before (async function () {
  ...
  global.baseURL = '';
  ...
});

  2. Now we need to access the variables that are passed into the before function from the command line. In Javascript, these arguments are stored in process.argv. If we console.log them real quick, we can see all that we have access to:


global.baseURL = '';
console.log(process.argv);

  3. Head back to your terminal and run npm test -- --url=http://www.outsideonline.com. You should see an array of values printed:


[ '/usr/local/Cellar/node/10.5.0_1/bin/node',
  'bootstrap.js',
  '--recursive',
  'test/',
  '--timeout',
  '30000',
  '--url=http://www.outsideonline.com' ]

  4. From the above array, we can see that our custom parameter is the last element. But don't let that fool you! We cannot guarantee that the URL will be the last parameter in this array (remember, we have two more custom parameters to create). So we need a way to loop through this list and retrieve the URL.

  5. Inside before in bootstrap.js, we are going to loop through all the parameters and find the one we need by the url key:


for (var i = 0; i < process.argv.length; i++) {
  var arg = process.argv[i];
  if (arg.includes('--url')) {
    // This is the url argument
  }
}

  6. In the above loop, we set arg to be the current iteration value and then check if that string includes url in it. Simple enough, right?

  7. Now we need to set global.baseURL to the URL passed in through the npm test command. Note, however, that the argument at this point is the whole string --url=www.outsideonline.com, so we need to extract just www.outsideonline.com. To do that, we split the string at the equals sign using the Javascript split function, which returns an array of the pieces on either side of the delimiter. In our case, splitting --url=www.outsideonline.com with arg.split("=") returns ['--url', 'www.outsideonline.com'], so the URL will be at index 1 of the resulting array.


if (arg.includes('url')) {
  // This is the url argument
  global.baseURL = arg.split("=")[1];
}

  8. Now that we have our URL, we need to update our tests to use it.

Open up homepage.spec.js; we are going to edit the before function there:


before (async () => {
  page = await browser.newPage();
  await page.goto(baseURL + '/', { waitUntil: 'networkidle2' });
});

  9. We are also going to keep our test from the previous post on Puppeteer:


it("should have the title", async () => {
  expect(await page.title()).to.eql("Outside Online")
});


  10. Now, if you run the tests with the URL added, they should work just as they did before: npm test -- --url=https://www.outsideonline.com

  11. Let's create another test to show the value of passing the URL through a custom parameter. Inside the test folder, create a file called contact.spec.js. We are going to test the "Contact Us" page found here: https://www.outsideonline.com/contact-us

  12. In this test, we are going to make sure the page has the title "Contact Us" using a very similar method:


describe('Contact Page Test', function() {
  before (async () => {
    page = await browser.newPage();
    await page.goto(baseURL + '/contact-us', { waitUntil: 'networkidle2' });
  });

  it("should have the title", async () => {
    expect(await page.title()).to.eql("Contact Us | Outside Online")
  });
});

As you can see above, with baseURL in place it is easy to change which page you test just by changing the path. And if for some reason we need to run against our local environment, we only have to change the --url parameter to the correct base URL!
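
To recap this section, here is a rough sketch of how the URL-handling pieces fit together inside before in bootstrap.js. The empty-string default and the bare puppeteer.launch() call are simplifications for illustration; the launch options are expanded in the next section.

// Sketch only. Assumes bootstrap.js already has: const puppeteer = require('puppeteer');
before (async function () {
  // Default base URL, overridden when --url=... is passed on the command line.
  global.baseURL = '';

  for (var i = 0; i < process.argv.length; i++) {
    var arg = process.argv[i];
    if (arg.includes('--url')) {
      // Everything after the "=" is the base URL.
      global.baseURL = arg.split("=")[1];
    }
  }

  // Launch headless Chrome; every spec file grabs pages from this shared browser.
  global.browser = await puppeteer.launch();
});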

View a Chrome Browser during Tests (non-headless)

Being able to see the Chrome browser instance that tests are running in helps developers quickly debug any problems. Luckily for us, this is controlled by a single flag that we just need to switch between true and false.

  1. The parameter we are going to pass in is --head, to indicate that we want to see the browser (headless mode remains the default when the flag is absent, so there is no need to pass --headless).

  2. Our npm test script will now look something like this:

npm test -- --url=http://outsideonline.com --head
 

  3. Inside of before in bootstrap.js, we need to update the for loop we created earlier to also check for the head parameter:


global.headlessMode = true;
for (var i = 0; i < process.argv.length; i++) {
  var arg = process.argv[i];
  if (arg.includes('url')) {
    // This is the url argument
    global.baseURL = arg.split("=")[1];
  }
  if (arg.includes("--head")) {
    global.headlessMode = false;
    // Turn off headless mode.
  }
}

  4. In this case, we only need to check whether the parameter exists in order to flip the flag! We use the headlessMode global to determine what gets passed into the puppeteer.launch command:


global.browser = await puppeteer.launch({headless: global.headlessMode});

  5. Lastly, if we are debugging in the browser, we probably do not want it to close after the tests are finished; we want to see what the page looks like. So inside the after function in bootstrap.js, we just need a simple if statement:


if (global.headlessMode) { 
  browser.close();
}

  6. And that's it! Go ahead and run npm test -- --url=http://www.outsideonline.com --head and you should see the tests in a browser!
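
One optional tweak while debugging with a visible browser: Puppeteer's launch options also accept slowMo, which delays each browser operation by the given number of milliseconds and makes the run easier to follow by eye. This is not part of the setup above, just a variation worth knowing about.

// Optional variation on the launch call: slow things down only when running with --head.
global.browser = await puppeteer.launch({
  headless: global.headlessMode,
  slowMo: global.headlessMode ? 0 : 250
});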

Take Screenshots of Failing Tests

Our last custom parameter helps us view screenshots of failing tests. Screenshots can be an important part of the workflow, helping us quickly debug errors or capture the state of a test. This works much like the head parameter: we are going to pass a --screenshot parameter.

  1. Let's again update before in bootstrap.js to take in this new parameter:


if (arg.includes("screenshot")) {
  // Set to debug mode.
  global.screenshot = true;
}
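
Strictly speaking, no default is needed here, since an unset global is falsy, but it reads more clearly if the flag is initialized alongside the headlessMode default from the previous section. A small sketch:

// Sketch: explicit default near the top of before in bootstrap.js.
global.screenshot = false; // flipped to true only when --screenshot is passed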

  2. Next up, we are going to implement another Mocha hook, afterEach, which runs after each test and gives us access to details about the test that just ran. Mainly, we are going to check whether the test failed or passed. If it failed, we know we need a screenshot. The afterEach function can go in bootstrap.js because all the tests we create will use it:


afterEach (function() {
  if (global.screenshot && this.currentTest.state === 'failed') {
    global.testFailed = true;
  }
});

  3. After a test has failed, we now have a global testFailed flag to trigger a screenshot in that specific test. Note: bootstrap.js does not have all the information about a test, just the base setup, so we need to let the individual test files know a test failed in order to get a picture of the right page.

  4. Head back to homepage.spec.js, where we are going to implement an after function.


after (async () => {
  if (global.testFailed) {
    await page.screenshot({
      path: "homepage_failed.png",
      fullPage: true
    });
    global.testFailed = false;
    await page.close();
    process.exit(1);
  } else {
    await page.close();
  }
});

  5. The above function checks whether the test failed, based on the testFailed flag. If it did, we take a full-page screenshot, reset the flag, close the page, and exit the process.

  6. Unfortunately, the above code works best inside each test file, so there will be some code duplication across tests (a shared-helper sketch that trims this down follows the list below). The path setting makes sure that no screenshot overwrites another test's by naming the file after the test. The screenshot will be saved in the base directory where we run the npm test command.

  7. To test and make sure this works, let's edit homepage.spec.js to expect a different title, like "Outside Magazine":


it("should have the title", async () => {
  expect(await page.title()).to.eql("Outside Magazine")
});

  8. We know this one will fail, so when we run npm test -- --url=http://www.outsideonline.com --screenshot we should get a generated screenshot! Look for a file named homepage_failed.png.
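
As noted above, the after block gets duplicated in every spec file. One way to trim that down is to move the logic into a small shared helper that each spec calls with its own name. This is an optional refactor rather than part of the original setup, and the helpers.js filename and path are hypothetical:

// helpers.js (hypothetical): shared teardown used by each spec's after hook.
module.exports.closeOrCapture = async function (page, name) {
  if (global.testFailed) {
    // Save a full-page screenshot named after the failing spec.
    await page.screenshot({ path: name + '_failed.png', fullPage: true });
    global.testFailed = false;
    await page.close();
    process.exit(1);
  } else {
    await page.close();
  }
};

Each spec's after hook then becomes a one-liner:

const { closeOrCapture } = require('./helpers'); // hypothetical path

after (async () => {
  await closeOrCapture(page, 'homepage');
});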

Recap & Final Thoughts

Adding custom parameters to your npm script is fairly simple once you get the hang of it, and from there you can easily customize your tests based on those parameters. Even with the custom parameters we have created, there is room for improvement: stricter checking of the arguments would be a good first step to rule out any unintended use cases (a sketch of what that might look like follows below). With the custom URL, headless mode, and screenshots, our tests are now easier to manage and debug if something ever fails. Check out the Puppeteer documentation, Mocha, and Chai to learn more!
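
As one illustration of that stricter checking, the argument loop could insist on the exact --url= prefix and validate the value before using it. This is a sketch, assuming Node's built-in WHATWG URL class is available (it is a global in Node 10 and later):

// Sketch: stricter handling of the --url argument in bootstrap.js.
for (var i = 0; i < process.argv.length; i++) {
  var arg = process.argv[i];
  if (arg.startsWith('--url=')) {
    var value = arg.slice('--url='.length);
    try {
      // new URL() throws on malformed input; .origin normalizes to scheme + host.
      global.baseURL = new URL(value).origin;
    } catch (e) {
      throw new Error('Invalid --url value: ' + value);
    }
  }
}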



from Outside Magazine: All https://ift.tt/2PqmnN5

Creating Tests with Puppeteer - Part 2

Testing with Puppeteer - Part 1

In a previous post on the Outside Developer Blog, we talked about our development workflow and how it includes a testing process. Over the past couple of months, we’ve been experimenting with making our testing process more efficient and helpful for our developers. In our research, we came across a tool from Google called Puppeteer, "a high level API to control Chrome or Chromium over the DevTools Protocol." In more basic terms, Puppeteer allows you to do anything you would do manually in Chrome but through code. Need a screenshot? Want to test form inputs? Need to test your web speed? Puppeteer can do all that and more.

Our tests used to be built with a tool called Casper that ran on top of a Selenium headless browser. Our experience with Casper was unfortunately troublesome, with tests failing for no apparent reason and inconsistent results across runs. The tests became so finicky that we started commenting out tests we knew were succeeding in the browser but failed under Casper. We still needed our builds, but Casper was no longer a reliable source of information about which tests passed and which failed. That was obviously a bad sign, bad practice, and a recipe for trouble down the line.

After experimenting and researching Puppeteer, we arrived at two questions:

  1. Should we change our tests from Casper to Puppeteer?

  2. Would Puppeteer be better and thus worth the switch?

As a team we decided it would at least be worth implementing one of our tests in Puppeteer and viewing the results.

Puppeteer + Mocha + Chai

For our tests, we decided that Puppeteer would provide the headless browser instance, while Mocha and Chai would handle the assertions. Mocha is a Javascript test runner and Chai is an assertion library that checks results against expectations. For example, we assert that the homepage has the title "Outside" on it: Mocha runs the test, and Chai compares the result with the expectation and reports whether it passed. Each test instantiates a headless Chrome instance using Puppeteer and uses Mocha and Chai to run the assertions.
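
As a rough illustration of how the three pieces fit together, here is a minimal sketch (the test itself is illustrative, not our production suite):

// Minimal sketch: Puppeteer drives headless Chrome, Mocha structures the test, Chai asserts.
const puppeteer = require('puppeteer');
const { expect } = require('chai');

describe('Homepage', function () {
  let browser;
  let page;

  before(async function () {
    this.timeout(30000);                // launching Chrome and loading a page can be slow
    browser = await puppeteer.launch(); // headless by default
    page = await browser.newPage();
    await page.goto('https://www.outsideonline.com/');
  });

  after(async function () {
    await browser.close();
  });

  it('has the expected title', async function () {
    expect(await page.title()).to.contain('Outside');
  });
});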

Results

Getting started with Puppeteer, Mocha, and Chai proved to be extremely straightforward. We were able to convert a previously failing Casper test to a working Puppeteer test within a few hours. Once we had one test suite running, we worked on converting all of our tests to Puppeteer and removing Casper from our process. In this shift, we were able to give developers more tools to help debug failing tests. Puppeteer has the option to run Chrome in a non-headless state, so a browser window opens up with the test parameters and allows a developer to interact with the test. We also implemented a screenshot workflow that captures the webpage for any failing test. Both of these options are simple parameters passed to the testing script. Our experience so far has been positive, and we look forward to diving deeper into Puppeteer.

Be sure to check out Part 2 to learn how we implemented Puppeteer, Mocha, and Chai to create our new test suite.



from Outside Magazine: All https://ift.tt/2wyIRDX

Dinner with Sasha DiGiulian