Drupal.org - aggregated feeds in category Planet Drupal

Lullabot: Tom Grandy on Backdrop, Drupal, and Education

Sat, 01/27/2018 - 11:04
In this episode, Matthew Tift talks with Tom Grandy, who oversees websites for 23 school districts. Tom describes himself as a journalist, a teacher, and a non-coder who helps out with documentation and marketing for Backdrop. He describes his experiences using proprietary software, finding Drupal, his involvement with Backdrop, and the challenges of using free software in K-12 education. Tom explains why people working in schools most often make technology decisions based on cost, but argues that we should also consider software licenses, communities, and other more philosophical factors.

Frederic Marand: Tip of the day: how to debug Composer scripts with XDebug and PhpStorm

Sat, 01/27/2018 - 05:51
The problem: XDebug doesn't work for Composer scripts

PhpStorm is quite convenient for debugging scripts with XDebug (do you support Derick for giving us XDebug?): just add a "Run/Debug configuration", choose the "PHP Script" type, give a few parameters, and you can start debugging your PHP CLI scripts, using breakpoints, evaluations, etc.

Wonderful. So now, let's define such a configuration to debug a Composer script, say a Behat configuration generator built from site settings for a current Drupal 8 project. Apply the configuration, run it in debug mode, and...

...PhpStorm doesn't stop: the script runs and ends, and all breakpoints are ignored. How do you actually use breakpoints in the IDE?


TIP Solutions: Adding social media feed to website - or not?

Fri, 01/26/2018 - 08:04

When adding a social media (SoMe) feed or "SoMe wall" (like the one in the bottom right corner here) to your webpage, the big question is: why would you put it on your site? Why is it there?

If the answer is something like "to get some content onto the site", consider again. The SoMe feed might benefit you or harm you depending on how you manage it.

Here are some points to consider:

Tags: Social media, SoMe, Drupal 8, SoMe wall, Planet Drupal

Love Huria: Emboss your footprints in the Drupal Sand - Drupal Camp Goa 2018, Call for sessions

Thu, 01/25/2018 - 19:00

We are thrilled to bring you the most exciting event of this year: Drupal Camp Goa 2018! You will be ecstatic to be a part of something really big that's happening in India’s most sought-after destination, Goa. This is a shoutout to all of you who love developing and would like to extend your immense support to the web’s leading content management system (CMS), Drupal!

Why should Drupalers have all the fun?

This time it is not just Drupal; we are exploring beyond it. Join us to share your knowledge on topics like...

Lullabot: Rock and a Hard Place: Changing Drupal.org Tooling

Thu, 01/25/2018 - 18:17
Matt and Mike talk with the Drupal Association's Tim Lehnen and Neil Drumm about the changes to Drupal.org's tooling.

Platform.sh: HHVM deploys off into the sunset

Thu, 01/25/2018 - 15:58

We always aim to offer our customers the best experience possible, with the tools they want to use. Usually that means expanding the platforms and languages we support (which now stands at six languages and counting), but occasionally it means dropping tools that are not being used so that we can focus resources on those that are.

For that reason, we will be dropping support for the HHVM runtime on 1 March 2018.

HHVM began life at Facebook as a faster, more robust PHP runtime. Although it never quite reached 100% PHP compatibility it got extremely close, and did see some success and buy-in outside of Facebook itself. Its most notable achievement, however, was providing PHP itself with much-needed competition, which in turn spurred the work that resulted in the massive performance improvements of PHP 7.

Similarly, Facebook's "PHP extended" language, Hack (which ran on HHVM), has seen only limited use outside of Facebook itself but served as a test bed and proving ground for many improvements and features that have since made their way into PHP itself. Like HHVM itself, though, Hack never achieved critical mass in the marketplace outside of Facebook.

Back in September, Facebook announced that they would be continuing development of Hack as its own language, and not aiming for PHP compatibility. Essentially Hack/HHVM will be a "full fork" of the PHP language and go its own way, and no longer try to be a drop-in replacement for PHP.

Platform.sh has offered HHVM support as a PHP alternative for several years, although, as in the broader market, it didn't see much use, and with the release of PHP 7 the performance advantage of HHVM basically disappeared, leading people to migrate back to vanilla PHP 7. Looking at our own statistics, in fact, we recently found that HHVM was virtually unused on our system.

"Give the people what they want" also means not giving them what they clearly don't want, and the PHP market clearly doesn't want HHVM at this point. We will therefore be dropping support for it on 1 March. If Hack/HHVM develops its own market in the future and there's demand for it we may look into re-adding it at that time, but we'll wait and see.

Good night, sweet HHVM, and may a herd of ElePHPants sing thee to thy REST!

Larry Garfield 30 Jan, 2018

Promet Source: How to Set Up Responsive Images in Drupal 8

Thu, 01/25/2018 - 13:56
Responsive images are great! If I wanted to quickly introduce what responsive images are to someone, I would say: On mobile? Small images. Tablet? Medium images. Desktop? Large images. This article is a complete "how to" for setting up responsive images in Drupal 8. If you are using Drupal 7, check out my previous article here: Picture Module: Building Responsive Images in Drupal 7.

Phase2: Decoupled Drupal: A Guide for Marketers

Thu, 01/25/2018 - 12:58

If you are considering a move to Drupal 8, or upgrading your current Drupal platform, it’s likely that you’ve come across the term “decoupled Drupal”, aka “headless Drupal”. But do you know what it means and what the implications of decoupled Drupal are for marketers? In this guide we will define decoupled Drupal and share three reasons why marketers should consider a decoupled architecture as they evolve their digital experience platforms.

Acro Media: Drupal Commerce 2: Set up Product Attributes using Rendered Fields

Thu, 01/25/2018 - 11:53

In part one of this Acro Media Tech Talk video series, we covered how you set up a new product attribute in Drupal Commerce 2. A product attribute is used to define options that customers would select when buying a product. For example, a hat might have various sizes (small, medium, large) and colours available. These are attributes.

In part two, we'll now take the colour attribute that was set up in part one and change it into a "rendered attribute". By default, the customer would select the option by seeing the name of the colour. A rendered attribute lets us instead show a colour swatch. So, instead of seeing the word "blue", the customer would see the actual colour. Cool!

This entire video series, when complete, will show you how to set up a new product in Drupal Commerce 2, from start to finish. The video is captured using our Urban Hipster Commerce 2 demo site.

Next week we'll post part 3: Set up a Product Variation Type with Custom Fields

It's important to note that this video was recorded before the official 2.0 release of Drupal Commerce, so you may see a few small differences between this video and the official release now available.

Urban Hipster Commerce 2 Demo site

This video was created using the Urban Hipster Commerce 2 demo site. We've built this site to show the adaptability of the Drupal 8, Commerce 2 platform. Most of what you see is out-of-the-box functionality combined with expert configuration and theming.


Drupal Association blog: DrupalCon Nashville and Tennessee’s Discrimination standing

Thu, 01/25/2018 - 10:51

As many already know, DrupalCon North America 2018 will be held in Nashville, TN. The Drupal Association puts a lot of time and effort into choosing a site for DrupalCon North America - a two to three year process that involves request for proposals, several rounds of interviews, site visits and contract negotiations. We do not take this lightly and we include both logistically important and socially relevant questions for review.

Unfortunately, sometimes things happen outside of our control, despite the great lengths we go to in planning. In April 2016, after a 5-month RFP and interview process, we signed a contract with the City of Nashville to host DrupalCon North America 2018. A few weeks later, the State of Tennessee introduced and passed a new law that the Drupal Association does not support and that, as many community members have pointed out, prevents public employees of the State of California from attending DrupalCon if sponsored by their employer.

For those who have asked, the timeline of events transpired as follows:

  • April 2016: Drupal Association contracted with Nashville, TN to host DrupalCon North America 2018
  • Early May 2016: Tennessee enacted the Amendment Senate Bill No. 1556 House Bill No. 1840
  • January 2017: California enacted restrictions banning state sponsored travel to TN in response to SB1556/HB1840.

Specifically, on May 2, 2016, SB1556/HB1840 was enacted. It declares that no person providing counseling or therapy services will be required to counsel or serve a client as to goals, outcomes, or behaviors that conflict with the sincerely held principles of the counselor or therapist; requires such counselor or therapist to refer the client to another counselor or therapist; creates immunity for such action; and maintains liability for counselors who will not counsel a client based on the counselor's religious beliefs when the individual seeking or undergoing the counseling is in imminent danger of harming themselves or others.

It is unfortunate that this bill became law. The Nashville Convention & Visitors Corporation, who we worked with to contract DrupalCon Nashville, and the greater Nashville business community including the Nashville Mayor’s office believe discrimination has no place in their home state.

In response to this bill and in anticipation of other potential discrimination bills in the future, Nashville Convention & Visitors Corporation became a founding and leading member of Tennessee Thrives, a business coalition of now more than 400 companies across Tennessee who believe that in order for Tennessee businesses and communities to thrive they must be diverse and welcoming for all people, regardless of race, sex, national origin, ethnicity, religion, age, disability, sexual orientation or gender identity. You can read more here about Tennessee Thrives and the Nashville Metro area’s history of social advancements, as well as a statement from the Nashville Convention and Visitors Corporation.

Here is the Tennessee Thrives pledge:

We believe that equal treatment of all Tennesseans and visitors is essential to maintaining Tennessee’s strong brand as a growing and exciting home for business innovation, economic development, a best-in-class workforce, and dynamic entertainment, travel and tourism industries.

In order for Tennessee businesses to compete for top talent, we believe our workplaces and communities must be diverse and welcoming for all people, regardless of race, sex, national origin, ethnicity, religion, age, disability, sexual orientation or gender identity.

As signers of the Tennessee Thrives pledge, we are committed to promoting an attractive, prosperous, and economically vibrant Tennessee. A united Tennessee is a thriving Tennessee.

Tennessee Thrives identified 12 discriminatory bills that were filed in the General Assembly in 2017, and with their efforts only two were approved.

As a further measure of welcome for our Drupal community, the Mayor of Nashville has extended a Statement of Welcome to the DrupalCon community. They are very excited that DrupalCon has chosen Nashville as its 2018 North American location, and hope we can see past the politics of the larger state to see the welcoming intent of the City of Nashville.

In response to the Drupal community concerns with Nashville as a DrupalCon city, the Nashville Convention & Visitors Corporation offered this statement:

Nashville is an open, welcoming city that respects and embraces the differences among us. We believe that our differences make our community stronger. A sampling of Nashville’s social advancements in contradiction to the actions of TN legislature include:

  • In 2016, the Metro Nashville Council unanimously voted to approve a resolution asking the state legislature to oppose bills opposing the U.S. Supreme Court’s decision on marriage equality. The resolution’s lead co-sponsor was Councilwoman Nancy Van Reese, who is openly gay.
  • On March 21, 2016, Mayor Megan Barry issued an executive order requiring training of all employees of the Metropolitan Government in diversity issues and sexual harassment awareness and prevention.
  • In May, 2016, Nashville hosted the International Gay Rugby Bingham Cup. Mayor Megan Barry served on the Host Committee to bring the Bingham Cup to Nashville.
  • While a mayoral candidate, Mayor Megan Barry officiated the first same-sex marriage in Nashville just hours after the Supreme Court ruled that same-sex marriage is allowed in all 50 states. (During her inauguration in September, 2015, Mayor Barry invited Nashville in Harmony to perform. The group is Tennessee’s first and only musical arts organization specifically created for gay, lesbian, bisexual, and transgender people – and their straight allies. The group performed at events hosted by the previous Nashville Mayor, as well.)
  • While a mayoral candidate, Mayor Megan Barry received the Ally Award from the Nashville LGBT Chamber of Commerce in 2015.
  • In 2011, Nashville extended nondiscrimination protections to employees of the city and contractors.  (Unfortunately, state government nullified the local decision.)
  • In 2009, the Metro Nashville Council passed an ordinance that protects Metro employees from discrimination based on their sexual orientation or gender identity. (Sponsored by then Council Member-At-Large Megan Barry, who now serves as Mayor of Nashville)
  • In 2008, the Metro Nashville School Board approved sexual orientation and gender identity protections for students and staff.

For those concerned about a Tennessee Bathroom Bill, please know that Tennessee has never passed such a bill; it gets killed in process every time it comes up for a vote, including this past March. There is no “Bathroom Bill” in the state of Tennessee. There are also all-gender restrooms offered at the Nashville Music City Center for use during DrupalCon. We understand people's concern with a state that submits this kind of law for consideration. We can probably all relate to the idea that the actions of lawmakers are not always representative of the greater population, particularly the population of a metro area, and Nashville shares this same concern.

At our core, the Drupal Association believes in community, collaboration, and openness. We work hard throughout the process of DrupalCon planning to be sure that not only the complicated logistics are addressed, but also an accessible space for everyone in our community to feel safe, welcome and comfortable.

In addition to our core DrupalCon programming, we also include the following services at DrupalCon for those who need them:

  • Our Code of Conduct
  • Registration grants and scholarships
  • Interpreters (for the hard of hearing)
  • Special meals: Kosher, Halal, vegan, vegetarian, gluten-free, etc
  • New mother’s room
  • Quiet room and prayer space
  • Venue accessibility and mobility assistance
  • Local AA Meeting information
  • Speaker inclusion fund
  • No-photograph lanyards and communication preference stickers
  • All-gender restrooms
  • Women in Drupal events
  • Inclusion BOFs
  • On-site contacts for incident reporting

You can learn more about all of these services on our DrupalCon Nashville website under On-site Resources.

We believe, despite the current legislative challenges that the City of Nashville is working to overcome at a state level, that we will have a safe, diverse, celebratory space for our community in Nashville this spring. We’re excited to bring DrupalCon to the city of Nashville, and we’re confident it will be an amazing event.

We want to hear about your experiences at DrupalCon and in the cities we visit. Please participate in our post-Con surveys so that we can follow up with both our internal teams and host cities if there are areas where the events can be improved for attendees.

aleksip.net: Data inheritance in Pattern Lab

Thu, 01/25/2018 - 07:26
When Pattern Lab renders a pattern, it does not by default include the data for any included patterns. There are plugins that can be used to include this data, but the many different ways to include patterns within another and to implement data inheritance can cause confusion.

erdfisch: Drupalcon mentored core sprint - part 3 - what happens next?

Thu, 01/25/2018 - 06:36
By Michael Lenahan

Hi there! This is the third and final part of a series of blog posts about the Drupal Mentored Core Sprint, which traditionally takes place every Friday at Drupalcon.

If you want to read what came before, here you go:
Part one is here
Part two is here

In this blog post, I would like to show you a little of what happens behind the scenes at the Drupalcon Friday contribution sprint.

The live core commit

The day is completed by the core live commit. This is where one issue that was worked on during the day is committed to Drupal's git repository.

In Vienna, the issue that got committed was https://www.drupal.org/node/2912636, the contributors on Friday were gido and wengerk. They were mentored by the wonderful valthebald, who we met in part two.

This is the moment when lauriii committed the code to the 8.5.x branch of Drupal, ably assisted by webchick:

Here's the thing about the live commit: anybody in the room could have been up there on stage. Behind the scenes, the mentoring team has been working hard with the core committers to ensure that a commit can be safely made. This is a difficult task: Drupal is a complicated system, and it's interesting to see just how much thought needs to go into a seemingly simple commit.

Below is a list of some other issues that were worked on during the Friday sprint at Vienna. Some have since been committed; others are still being worked on, even now. The point here is that progress was made on these issues and new contributors helped to move them forward (take a look at what happened in these issues on 29 September, 2017):

Coding Standards
DbLog erroring
SettingsTray disappearing
Add @internal to Form classes
Table drag
Batch missing title on screen
Url alias for private file uploads
Remove #size
Views DISTINCT multilingual
Toolbar uncacheable page
spelling"therefor"

The live commit is a chance for us to celebrate the success of one team, but really all those who worked on the issues above deserve to be celebrated. Our measure for how successful the day has been is whether or not the participants return to the issues after the day is over, and keep using their contribution skills.

Sign up to be a mentor

Are you coming to Nashville? Are you thinking, "maybe I have the skills to be a mentor"? That's great!

Sign up to be a mentor here.

After that, you will get regular emails with instructions on how to prepare for the Mentored Core Sprint.

Don't feel that you need to know the answers to everything in order to be a mentor. You will always have other mentors around you, people you can ask for help when you get stuck.

In the Mentored Core Sprint, we are using a really well-tested process, which we have refined and improved over many years.

The key thing to remember is this: you don't need to fix the issue for the participants. Your job is to teach them how the issue queue works.

Understanding the value of finding the solution is far more important than finding the solution itself.

What to do at Drupalcon

In the exhibition hall, there is a Mentors' Table. Go and say hello, it's a good place to hang out. We have stickers for you, and mentoring cards explaining all the different tasks on offer ...

Keep an eye out on the BoFs board during the week. There are special meetings to prepare first-time mentors, plus a meeting to do issue triage to determine good Novice issues.

Here's a clue: Novice does NOT mean trivial or easy. It means that the steps on the issue are well-defined, and actionable.

Then, show up bright and early on sprint day and have a great time!

You'll be wearing the best t-shirt in town.

Here is Rachel, briefing the team before the day starts.

Every year, after it's all over, we meet at a nice restaurant for the mentors' dinner. Thank you to the wonderful companies in the Drupal community who sponsored us last September in Vienna.

So, that's a wrap!

There's a lot more to be said on this topic, but I'll leave it there. I hope I've been able to persuade you to give the Friday core sprint a try, as a participant or as a mentor. It's worth it.

If you're going to Nashville (lucky you), then make sure you stay for the Friday as well.

We're currently planning Drupal Europe. We will most definitely include a Mentored Contribution Day! See you there!

Credit to Amazee Labs and Roy Segall for use of photos from the Drupalcon Vienna flickr stream, made available under the CC BY-NC-SA 2.0 licence.

Tags: planet, drupal-planet, drupalcon, mentoring, code sprint

Lullabot: Local Drupal Development Roundup

Wed, 01/24/2018 - 15:35

If you’d asked me a decade ago what local setup for web development would look like, I would have guessed “simpler, easier, and turn-key”. After all, WAMP was getting to be rather usable and stable on Windows, Linux was beginning to be preinstalled on laptops, and Mac OS X was in its heyday of being the primary focus for Apple.

Today, I see every new web developer struggle with just keeping their locals running. Instead of consolidation, we’ve seen a multitude of good options become available, with no clear “best” choice. Many of these options require a strong, almost expert-level of understanding of *nix systems administration and management. Yet, most junior web developers have little command line experience or have only been exposed to Windows environments in their post-secondary training.

What’s a developer lead to do? Let's review the options available for 2018!

1. The stack as an app: *AMP and friends

In this model, a native application is downloaded and run locally. For example, MAMP contains an isolated stack with Apache, PHP, and MySQL compiled for Windows or macOS. This is by far the simplest way to get a local environment up and running for Mac or Windows users. It’s also the easiest to recover from when things go wrong. Simply uninstall and reinstall the app, and you’ll have a clean slate.

However, there are some significant limitations. If your PHP app requires a PHP extension that’s not included, adding it in by hand can be difficult. Sometimes, the configuration they ship with can deviate from your actual server environments, leading to the “it works on my local but nowhere else” problem. Finally, the skills you learn won’t apply directly to production environments, or if you change operating systems locally.

2. Native on the workstation

This style of setup involves using the command line to install the appropriate software locally. For example, Mac users would use Homebrew and Homebrew-PHP to install Apache, PHP, and MySQL. Linux users would use apt or yum - which would be similar to setting up on a remote server. Windows users have the option of the Linux subsystem now available in Windows 10.

This is slightly more complicated than an AMP application as it requires the command line instead of using a GUI dashboard. Instead of one bundle with “everything”, you have to know what you need to install. For example, simply running apt install php won’t give you common extensions like gd for image processing. However, once you’ve set up a local this way, you will have immediately transferable skills to production environments. And, if you need to install something like the PHP mongodb or redis extensions, it’s straightforward either through the package manager or through pecl.

Linux on the Laptop

Running a Linux distribution as your primary operating system is a great way to do local development. Everything you do is transferable to production environments, and there are incredible resources online for learning how to set everything up. However, the usual caveats around battery life and laptop hardware availability for Linux support remain.

3. Virtual Machines

Virtual machines are actually really old technology—older than Unix itself. As hardware extensions for virtualization support and 4GB+ of RAM became standard in workstations, running a full virtual machine for development work (and not just on servers) became reasonable. With 8 or 16GB of memory, it’s entirely reasonable to run multiple virtual machines at once without a noticeable slowdown.

VirtualBox is a broadly used, free virtual machine application that runs on macOS, Linux, and Windows. Using virtual machines can significantly simplify local development when working on significantly different sites. Perhaps one site is using PHP 5.6 with MySQL, and another is using PHP 7.1 with MariaDB. Or, another is running something entirely different, like Ruby, Python, or even Windows and .Net. Having virtual machines lets you keep the environment separate and isolated.

However, maintaining each environment can take time. You have to manually copy code into the virtual machine, or install a full environment for editing code. Resetting to a pristine state takes time.

Vagrant

Clearly, there were advantages in using virtual machines—if only they were easier to maintain! This is where Vagrant comes in. For example, instead of spending time adding a virtual machine with a wizard, and manually running an OS installer, Vagrant makes initial setup as easy as vagrant up.

Vagrant really shines in my work as an architect, where I’m often auditing a few different sites at the same time. I may not have access to anything beyond a git repository and a database dump, so having a generic, repeatable, and isolated PHP environment is a huge time saver.

Syncing code into a VM is something Vagrant handles out of the box, with support for NFS on Linux and macOS hosts, SMB on Windows hosts, and rsync for anywhere. This saves from having to maintain multiple IDE and editor installations, letting those all live on your primary OS.

Of course, someone has to create the initial virtual machine and configure it into something called a “base box”. Conceptually, a base box is what each Vagrant project forks off of, such as ubuntu/zesty. Some developers prefer to start with an OS-only box, and then use a provisioning tool like Ansible or Puppet to add packages and configure them. I’ve found that’s too complicated for many developers, who just want a straightforward VM they can boot and edit. Luckily, Vagrant also supports custom base boxes with whatever software you want baked in.

For Drupal development, there’s DrupalVM or my own provisionless trusty-lamp base box. You can find more base boxes on the Vagrant website.

4. Docker

In many circles, Docker is the “one true answer” for local development. While Docker has a lot of promise, in my experience it’s also the most complicated option available. Docker uses APIs that are part of the Linux kernel to run containers, which means that Docker containers can’t run straight under macOS or Windows. In those cases, a lightweight virtual machine is run, and Docker containers are run inside of that. If you’re already using Docker in production (which is its own can of worms), then running Docker for locals can be a huge win.

Like a virtual machine, somehow your in-development code has to be pushed inside of the container. This has been a historical pain point for Docker, and can only be avoided by running Linux as your primary OS. docker-sync is probably the best solution today until the osxfs driver gets closer to native performance. Using Linux as your primary operating system will give you the best Docker experience, as it can use bind mounts which have no performance impact.

I’ve heard good things about Kalabox, but haven’t used it myself. Kalabox works fine today but is not being actively developed, in favor of Lando, a CLI tool. Pantheon supports taking an existing site and making it work locally through a Kalabox plugin. If your hosting provider offers tooling like that, it’s worth investigating before diving too deeply into other options.

I did some investigation recently into docker4drupal. It worked pretty well in my basic setup, but I haven’t used it on a real client project for day-to-day work. It includes many optional services that are disabled out of the box but may be a little overwhelming to read through. A good strategy to learn how Docker works is to build a basic local environment by hand, and then switch over to docker4drupal to save having to maintain something custom over the long run.

ddev is another “tool on top of docker” made by a team with ties to the Drupal community. It was easy to get going for a basic Drupal 8 site. One interesting design decision is to store site files and database files outside of Docker, and to require a special flag to remove them. While this limits some Docker functionality (like snapshotting a database container for update hook testing), I’ve seen many developers lose an hour after accidentally deleting a container. If they keep focusing on these common pain points, this could eventually be one of the most friendly Docker tools to use.

One of the biggest issues with Docker on macOS is that by default, it stores all containers in a single disk image limited to 64GB of space. I’ve seen developers fill this up and completely trash all of their local Docker instances. Deleting containers often won’t recover much space from this file, so if your Mac is running out of disk space you may have to reset Docker entirely to recover the disk space.

When things go wrong, debugging your local environment with Docker requires a solid understanding of an entire stack of software: Shells in both your host and your containers, Linux package managers, init systems, networking, docker-compose, and Docker itself.

I have worked with a few clients who were using Docker for both production and local development. In one case, a small team with only two developers ended up going back to MAMP for locals due to the complexity of Docker relative to their needs. In the other case, I found it was faster to pull the site into a Vagrant VM than to get their Docker containers up and running smoothly. What’s important is to remember that Docker doesn’t solve the scripting and container setup for you—so if you decide to use Docker, be prepared to maintain that tooling and infrastructure. The only thing worse than no local environment automation is automation that’s broken.

At Lullabot, we use Docker to run Tugboat, and for local development of lullabot.com itself. It took some valiant efforts by Sally Young, but it’s been fairly smooth since we transitioned to using docker-sync.

What should your team use?

Paraphrasing what I wrote over in the README for the trusty-lamp basebox:

Deciding what local development environment to choose for you and your team can be tricky. Here are three options, ordered in terms of complexity:

  1. Is your team entirely new to PHP and web development in general? Consider using something like MAMP instead of Vagrant or Docker.
  2. Does your team have a good handle on web development, but are running into the limitations of running the site on macOS or Windows? Does your team have mixed operating systems including Windows and Linux? Consider using Vagrant to solve all of these pain points.
  3. Is your team using Docker in production, or already maintaining Dockerfiles? If so, consider using docker4drupal or your production Docker containers locally.

Where do you see local development going in 2018? If you had time to completely reset from scratch, what tooling would you use? Let us know in the comments below.

myDropWizard.com: Use the Backup and Migrate module in Drupal 6? Audit your permissions!

Wed, 01/24/2018 - 14:20

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, a security update for the Backup and Migrate module for Drupal 7 was released for a Critical issue that could allow arbitrary PHP execution - see the security advisory.

While arbitrary PHP execution is scary, this issue is actually about the permissions provided by the Backup and Migrate module not being marked as potentially dangerous. The new release simply marks those permissions appropriately.

There won't be a security release for this issue for Drupal 6!

This is because Drupal 6 doesn't provide a way to mark permissions as dangerous. It doesn't even allow a separate description for the permissions, which we could use to call out the danger (the machine name used in code is the same as the name shown to users - this is no longer the case in Drupal 7 and newer).

However, marking the permissions as dangerous isn't the real fix! The real fix is auditing your permissions to "verify only trusted users are granted permissions defined by the module."

This is something you can do with Drupal 6, even without a new release. :-)

So, in summary: no security release for Drupal 6 - go audit your permissions.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Daniel Pocock: apt-get install more contributors

Wed, 01/24/2018 - 06:21

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

miggle: learning Drupal in a week - my first job experience

Wed, 01/24/2018 - 04:38
Upon arriving I was welcomed to the office and settled in at a desk. Initially, I was tasked with exploring Drupal and what it could do. Acquia Dev Desktop was the first application I opened, and after experimenting with some of the prebuilt sites I began to gather an understanding of Drupal and why it is used.

INsReady: Single Sign-on using OAuth2 and JWT for Distributed Architecture

Wed, 01/24/2018 - 00:35

Single sign-on (SSO) is a property, where a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or in some configurations seamlessly sign on at each system. A simple version of single sign-on can be achieved over IP networks using cookies but only if the sites share a common DNS parent domain. ---- https://en.wikipedia.org/wiki/Single_sign-on

As the definition suggests, one can imagine that SSO becomes a critical part of the system design and user experience design for a complex and distributed system, or for a new application integrating with an existing connected system. With SSO enabled, a system owner can manage access control in a centralized place, so granting users permissions across multiple subsystems stays organized. On the other hand, an end user only needs to secure one set of credentials to access multiple resources, or to access functionality whose distributed architecture is hidden from the user.

As we enter 2018, our software becomes more complex and its services become more ubiquitous. Let's use Google's SSO as an example to illustrate the demand for a modern SSO:

  • A user can sign in with password once for both Gmail.com and YouTube.com
  • A user can go to Feedly.com or New York Times and use the "Sign-in with Google" to authorize third parties to access the user's data
  • A user can sign in with password on a mobile device to sync all photos or contacts from Google
  • A Google Home device can connect to multiple people's Google accounts, and read out their calendar events when needed
  • YouTube.com developers can use Polymer as frontend technology, and authenticate with YouTube.com backend to load the content via web services API

You might not realize the complexity of such a system until your own system needs to support the modern use cases above and you have to develop that support. Let's translate the above use cases into SSO technical requirements:

  • Support SSO across multiple domains
  • Support Password Grant (sign-in directly on the web), Authorization Code Grant (user authorizes a third party), Client Credentials Grant (machine sign-in), and Implicit Grant (third-party web app sign-in)
  • Support distributed architecture, where your authentication server is not necessarily on the same domain or the same server as your resource servers
  • Web services APIs on resource servers can effectively authenticate requests
  • No technology lock-in for the authentication server, resource servers, or client-side apps
  • Support a seamless user authorization experience across different client-side technologies (Web, Mobile or IoT), and across different first-party and third-party applications

Fortunately, we can leverage existing open standards and open source software to implement SSO for a distributed system. First, we will rely on the OAuth 2.0 Authorization Framework and JSON Web Token (JWT) open protocols. OAuth 2.0 is used to support common authentication workflows; in fact, the four types of grants in the requirements above are terminology borrowed from the OAuth 2.0 protocol. The JWT protocol is used to standardize the sharing of a successful authentication result across client apps and resource servers. The protocol allows a resource server to trust a client request without double-checking with the authentication server, which lowers the amount of communication within a distributed system and therefore increases the performance of overall authentication and identification. For more technical details on how to use OAuth 2.0 and JWT for authentication, please see Stateless authentication with OAuth 2 and JWT - JavaZone 2015.

Regarding building the authentication server, where all users and machines will sign in, authenticate, authorize, or identify themselves, the critical requirement is that this server implements the OAuth 2.0 protocol and uses JWT as the bearer token. As long as the authentication server implements the protocols, the rest of the facilitating features can be built on any technology. I like to use the simple_oauth module with Drupal 8, because out of the box this solution provides the whole application, including users, consumers and tokens management. In particular, I have been helping to optimize the user experience of the user authorization process for different use cases. If you are not familiar with Drupal, a particular distribution, Contenta CMS, has pre-packaged simple_oauth and its dependencies for you.

Once the authentication server is in place, we will implement the protocol and workflows on the resource servers and client-side apps. This part largely depends on the resource server and client-side technologies you picked. We are building this part of the integration with Node.js, Laravel, Drupal 7 and Drupal 8 applications. At the time of writing, we have published the module oauth2_jwt_sso on Drupal 8.

I leave the extensibility, limitations, and more technical details of this SSO solution for the upcoming DrupalCon Nashville session. I will include the session video here in late April 2018.

Files: SSO diagram.png
Tags: SSO, OAuth2, JWT, Decoupled, Distributed Architecture, Security, Drupal Planet

PreviousNext: Better image optimisation in Drupal

Tue, 01/23/2018 - 22:08

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your Front End workflow is easy, but how about author-uploaded images through the CMS?

by Tony Comben / 24 January 2018

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the Image Optimize module, which allows you to use a number of compression tools, both local and 3rd party.

The tools it currently supports include AdvPng, OptiPng, PngCrush, PngOut, PngQuant, JfifRemove, JpegOptim and JpegTran.

We decided to avoid the use of 3rd party services, as processing the images on our servers could reduce processing time (no waiting for a third party to reply) and ensure reliability.

Picking your server-side compression tool

In order to pick the tools which best served our needs, we picked images that closely represented the type of image the authors often used: an image featuring a person’s face with a complex background - one PNG and one JPEG - and ran them through each of the tools with a moderately aggressive compression level.

PNG Results

Compression Library                     Compressed size   Percentage saving
Original (Drupal 8 default resizing)    234kb             -
AdvPng                                  234kb             0%
OptiPng                                 200kb             14.52%
PngCrush                                200kb             14.52%
PngOut                                  194kb             17.09%
PngQuant                                63kb              73.07%

Compression Library                     Compressed size   Percentage saving
Original                                1403kb            -
AdvPng                                  1403kb            0%
OptiPng                                 1288kb            8.19%
PngCrush                                1288kb            8.19%
PngOut                                  1313kb            6.41%
PngQuant                                445kb             68.28%

JPEG Results

Compression Library                     Compressed size   Percentage saving
Original (Drupal 8 default resizing)    57kb              -
JfifRemove                              57kb              0%
JpegOptim                               49kb              14.03%
JpegTran                                57kb              0%

Compression Library                     Compressed size   Percentage saving
Original                                778kb             -
JfifRemove                              778kb             0%
JpegOptim                               83kb              89.33%
JpegTran                                715kb             8.09%

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimization pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed to the server Image Optimize should pick up their location automatically, or you can manually set the location if using a standalone binary.

JpegOptim has some additional quality settings: I’m setting “Progressive” to always and “Quality” to a sweet spot of 60. 70 could also be used as a more conservative target.

The final pipeline looks like the following:

Back to the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Applying compression to existing images

Flushing the image cache will recreate existing images with compression the next time the image is loaded. This can be done with the drush command 

drush image-flush --all

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.

Tagged Image Optimisation

MidCamp - Midwest Drupal Camp: We are pleased to announce Chris Rooney will be our keynote speaker at MidCamp 2018

Tue, 01/23/2018 - 19:51
We are pleased to announce Chris Rooney will be our keynote speaker at MidCamp 2018

We are so excited to have Chris as our keynote speaker this year.  He is the President and Founder of Digital Bridge Solutions, a Drupal and Magento Agency here in Chicago that has been a supporter of MidCamp since its inception. 

His presentation at our 2017 event, Whitewashed - Drupal's Diversity Problem And How To Solve It, was a deep and eye-opening look at diversity in Drupal and the greater tech world, and how we can go about making it better.

Since then, he has partnered with Palantir.net on an ambitious inclusion initiative working with students to introduce them to Drupal.  Last year, they brought a group of students from Baltimore to DrupalCon Baltimore.  They have held Drupal training sessions here in Chicago, and are currently working to bring students from Genesys Works and NPower to DrupalCon Nashville.

Chris' presentation will be a collective group journey into sensitive and vulnerable territories, but promises interactivity, a safe space for the exchange of ideas, and perhaps even a little humor.  We hope you join us for it.

Session Submissions close Friday!

MidCamp is looking for folks just like you to speak to our Drupal audience! Experienced speakers are always welcome, but our camp is also a great place to start for first-time speakers.

MidCamp is soliciting sessions geared toward beginner through advanced Drupal users. Know someone who might be a new voice, but has something to say? Please suggest they submit a session.

Find out more at:

Buy a Ticket

Tickets and Individual Sponsorships are available on the site for MidCamp 2018.

Click here to get yours!

Schedule of Events
  • Thursday, March 8th, 2018 - Training and Sprints
  • Friday, March 9th, 2018 - Sessions and Social
  • Saturday, March 10th, 2018 - Sessions and Social
  • Sunday, March 11th, 2018 - Sprints
Sponsor MidCamp 2018!

Are you or your company interested in becoming a sponsor for the 2018 event? Sponsoring MidCamp is a great way to promote your company, organization, or product and to show your support for Drupal and the Midwest Drupal community. It also is a great opportunity to connect with potential customers and recruit talent.

Find out more at:

Volunteer for MidCamp 2018

Want to be part of the MidCamp action? We're always looking for volunteers to help out during the event.  We need registration table help, room monitors, help with setting up the venue, and help clearing out.  Sign up at http://bit.ly/midcamp-volunteer-signup and we'll be in touch shortly!

We hope you'll join us at MidCamp 2018!

Dcycle: Caching a Drupal 8 REST resource

Tue, 01/23/2018 - 19:00

Here are a few things I learned about caching for REST resources.

There are probably better ways to accomplish this, but here is what works for me.

Let’s say we have a rest resource that looks something like this in my_module/src/Plugin/rest/resource/MyRestResource.php and we have enabled it using the Rest UI module and given anonymous users permission to view it:

<?php

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * This is just an example.
 *
 * @RestResource(
 *   id = "this_is_just_an_example",
 *   label = @Translation("Display the title of node 1"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/get"
 *   }
 * )
 */
class MyRestResource extends ResourceBase {

  /**
   * {@inheritdoc}
   */
  public function get() {
    $node = node_load(1);
    $response = new ResourceResponse(
      [
        'title' => $node->getTitle(),
        'time' => time(),
      ]
    );
    return $response;
  }

}

Now, we can visit http://example.localhost/api/v1/get?_format=json and we will see something like:

{"title":"Some Title","time":1516803204}

Reloading the page, ‘time’ stays the same. That means caching is working; we are not re-computing our Json output each time someone requests it.

How to invalidate the cache when the title changes.

If we edit node 1 and change its title to, say, “Another title”, and reload http://example.localhost/api/v1/get?_format=json, we’ll see the old title. To make sure the cache is invalidated when this happens, we need to provide cacheability metadata to our response telling it when it needs to be recomputed.

Our node, when it’s loaded, contains within it all the caching metadata needed to describe when it should be recomputed: when the title changes, when new filters are added to the text format that’s being used, etc. We can add this information to our ResourceResponse like this:

...
$response->addCacheableDependency($node);
return $response;
...

When we clear our cache with drush cr and reload our page, we’ll see something like:

{"title":"Another title","time":1516804411}

We know this is still cached because the time stays the same no matter how often we load the page. Try it, it’s fun!

Even more fun is changing the title of node 1 and reloading our Json page, and seeing the title change without clearing the cache:

{"title":"Yet another title","time":1516804481} How to set custom cache invalidation events

Let’s say you want to trigger a cache rebuild for some reason other than those defined by the node itself (title change, etc.).

A real-world example might be events: an “upcoming events” page should only display events which start later than now. If we invalidate the cache every day, then we’ll never show yesterday’s events in our events feed. Here, we need to add our custom cache invalidation event, in this case “rebuild events feed”.

For the purpose of this demo, we won’t actually build an events feed, but we’ll see how cron might be able to trigger cache invalidation.

Let’s add the following code to our response:

...
use Drupal\Core\Cache\CacheableMetadata;
...
$response->addCacheableDependency($node);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray([
  '#cache' => [
    'tags' => [
      'rebuild-events-feed',
    ],
  ],
]));
return $response;
...

This uses Drupal’s cache tags concept and tells Drupal that when the cache tag ‘rebuild-events-feed’ is invalidated, all cacheable responses which have that cache tag should be invalidated as well. I prefer this to the ‘max-age’ cache tag because it allows us more fine-grained control over when to invalidate our caches.

On cron, we could only invalidate ‘rebuild-events-feed’ if events have passed since our last invalidation of that tag, for example.

For this example, we’ll just invalidate it manually. Clear your cache to begin using the new code (drush cr), then load the page, you will see something like:

{"hello":"Yet another title","time":1516805677}

As always, the time remains the same no matter how many times you reload the page.

Let’s say you are in the midst of a cron run and you have determined that you need to invalidate your cache for response which have the cache tag ‘rebuild-events-feed’, you can run:

\Drupal::service('cache_tags.invalidator')->invalidateTags(['rebuild-events-feed'])

Let’s do it in Drush to see it in action:

drush ev "\Drupal::service('cache_tags.invalidator')->\ invalidateTags(['rebuild-events-feed'])"

We’ve just invalidated our ‘rebuild-events-feed’ tag and, hence, Responses that use it.

The dreaded “leaked metadata” error

This one is beyond my competence level, but I wanted to mention it anyway.

Let’s say you want to output your node’s URL to Json, you might consider computing it using $node->toUrl()->toString(). This will give us “/node/1”.

Let’s add it to our code:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(),
'time' => time(),
...

This results in a very ugly error which completely breaks your site (at least at the time of this writing): “The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.”.

The problem, it seems, is that Drupal detects that the URL object, like the node we saw earlier, contains its own internal information which tells it when its cache should be invalidated. Converting it to a string prevents the Response from being informed about that information somehow (again, if someone can explain this better than me, please leave a comment), so an exception is thrown.

The ‘toString()’ function has an optional parameter, “$collect_bubbleable_metadata”, which can be used to get not just a string, but also information about when its cache should be invalidated. In Drush, this will look something like:

drush ev 'print_r(node_load(1)->toUrl()->toString(TRUE))'

Drupal\Core\GeneratedUrl Object
(
    [generatedUrl:protected] => /node/1
    [cacheContexts:protected] => Array
        (
        )
    [cacheTags:protected] => Array
        (
        )
    [cacheMaxAge:protected] => -1
    [attachments:protected] => Array
        (
        )
)

This changes the return type of toString(), though: toString() no longer returns a string but a GeneratedUrl, so this won’t work:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(TRUE),
'time' => time(),
...

It gives us the error “Could not normalize object of type Drupal\Core\GeneratedUrl, no supporting normalizer found”.

ohthehugemanatee commented on Drupal.org on how to fix this. Integrating his suggestion, our code now looks like:

...
$url = $node->toUrl()->toString(TRUE);
$response = new ResourceResponse(
  [
    'title' => $node->getTitle(),
    'url' => $url->getGeneratedUrl(),
    'time' => time(),
  ]
);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
...

This will now work as expected.

With all the fun we’re having, though, let’s take this a step further: let’s say we want to export the feed of frontpage items in our Response:

$url = $node->toUrl()->toString(TRUE);
$view = \Drupal\views\Views::getView("frontpage");
$view->setDisplay("feed_1");
$view_render_array = $view->render();
$rendered_view = render($view_render_array);
$response = new ResourceResponse(
  [
    'title' => $node->getTitle(),
    'url' => $url->getGeneratedUrl(),
    'view' => $rendered_view,
    'time' => time(),
  ]
);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

You will not be surprised to see the “leaked metadata was detected” error again… In fact, you have come to love and expect this error at this point.

Here is where I’m completely out of my league; according to Crell, “[i]f you [use render() yourself], you’re wrong and you should fix your code”, but I’m not sure how to get a rendered view without using render() myself… I’ve implemented a variation on a comment on Drupal.org by mikejw suggesting using a different render context to prevent Drupal from complaining.

$view_render_array = NULL;
$rendered_view = NULL;
\Drupal::service('renderer')->executeInRenderContext(new RenderContext(), function () use ($view, &$view_render_array, &$rendered_view) {
  $view_render_array = $view->render();
  $rendered_view = render($view_render_array);
});

If we check to make sure we have this line in our code:

$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

we’re telling our Response’s cache to invalidate whenever our view’s cache invalidates. So, for example, if we have several nodes promoted to the front page in our view, we can modify any one of them and our entire Response’s cache will be invalidated and rebuilt.

Resources and further reading

