Drupal Planet

Drupal.org - aggregated feeds in category Planet Drupal

Code Karate: An Intro to Lando with Drupal 8

Sun, 10/28/2018 - 08:21
Episode Number: 211

Lando is what the cool kids are using for their local development environments these days. In this episode, I give you a quick introduction to Lando and show you how it can be used to create a Drupal 8 site in less than a minute. I also show you how you can integrate Lando into your workflow if you are hosting your websites on Pantheon.
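
For context, Lando drives each project from a .lando.yml file at the project root. A minimal sketch for a Drupal 8 site might look like this (assuming the drupal8 recipe that ships with Lando; the project name and webroot are illustrative values, not from the episode):

    # .lando.yml - minimal sketch, assuming the "drupal8" recipe.
    # "name" and "webroot" are illustrative values.
    name: my-drupal8-site
    recipe: drupal8
    config:
      webroot: web

With that file in place, lando start builds the containers and prints the site URL. Lando also ships a pantheon recipe, which is one way to wire up the Pantheon workflow mentioned above.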

Are you using or have you tried using Lando yet? What are your thoughts?

Check out the Code Karate Patreon page

Tags: DevOps, Drupal, Drupal 8, Drupal Planet
Categories: Drupal

Bay Area Drupal Camp: BADCamp: Did You Lose Anything?

Sun, 10/28/2018 - 04:17

BADCamp is officially over, and we're striking the circus tent as I write this. A quick last-minute note in case you forgot or lost anything: by all means, let us know. Some things sitting in our lost and found at 5:30 PM:

  • A black Office Depot notebook with someone's excellent notes of BADCamp-ish topics.
  • Someone's white and black prescription eye glasses.
  • Two painted sticks that look like they are used for juggling.
  • A white USB thumb drive (about 4" in length).
  • A Contigo water bottle.

These items, except for the thumb drive, have been left at the 2nd floor reception desk at the MLK Student Union building. They are generally kept for about a week, so if you want them back, please contact the front desk at 510.664.7976.

Since data can be very valuable, the MLK ASUC people recommended that we handle the drive differently. If you lost your drive and want it back, please send us a personal message on Twitter, @BADCamp, and we'll get it back to you.

Categories: Drupal

Drupal Atlanta Medium Publication: DrupalCamp Organizers Unite: Is it Time for Camp Organizers to Become an Official Working Group?

Sat, 10/27/2018 - 23:24
If the community is a top priority, then resources for organizing DrupalCamps must also be a top priority.

“Together We Create graffiti wall decor” by “My Life Through A Lens” on Unsplash

Community, community and more community. One of the common themes we hear when it comes to evaluating Drupal against other content management systems (CMS) is that the community is made up of over 100,000 highly skilled and passionate developers who contribute code. And in many of these application evaluations, it’s the community, not the software, that leads to Drupal winning the bid. We have also heard Dries Buytaert speak about the importance of the community at various DrupalCons, and he is quoted on Drupal.org’s getting involved page:

“It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base.” — Dries Buytaert

My First Encounter with the Drupal Community

With this emphasis on community, I tried to think back to how and when I first interacted with the community. Like so many others, my first introduction to Drupal was at a local meetup. I remember going to an office building in Atlanta, and the room was packed with people, plenty of pizza, soda and, of course, laptops. It was a nice relaxed atmosphere where we introduced ourselves and got a chance to know each other a little bit. Then the lights dimmed, the projector turned on and the presentations kicked off, highlighting some new content strategy or a new module that can help lay out your content. After that first meetup, I felt energized, because until that point I had never spoken with someone in person about Drupal, and it was the first time that I was introduced to Drupal professionals and companies.

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00pm (EST)? RSVP Here.

DrupalCamps Play An Integral Role in Fostering Community

After attending a few meetups, I joined the email list, and I received an email announcing that DrupalCamp Atlanta would be held at Georgia Tech and that the call for proposals was open for session submissions.

2013 DrupalCamp Atlanta photo by Mediacurrent

I purchased a ticket for a mere $30 and added it to my Google calendar. On the day of the event, I remember walking in the front door and being blown away by the professionalism of the conference, as there were sponsor booths, giveaways, and four concurrent sessions throughout the day. But it wasn’t until I was inside the auditorium during the opening session and saw the 200 or so people pile in that I realized this Drupal community thing I had heard about was for real. Over the next couple of years, I decided that I would attend other camps instead of DrupalCon because the camps were more affordable and less intimidating. My first camp outside of Atlanta was Design4Drupal in Boston; DrupalCamp Charlotte, DrupalCamp Florida and BADCamp were all camps I went to before attending a DrupalCon. All of these camps were top notch, but what I really loved is that each camp had its own identity and culture. It’s exactly what I think a community should be, and for the very first time, I felt that I was a part of the Drupal community.

Why Establish the DrupalCamp Organizers Council?

As my previous examples show, one of the advantages of Drupal comes from its great community, and DrupalCamps are an important aspect of fostering this community. Running any event can be challenging, but to pull off a respectable DrupalCamp you have to consider so many things, such as the website, credit card processing, food, accepting and rejecting sessions, finding a keynote speaker, the afterparty, pre-conference trainings, oh, and did I mention the website? You get my drift; it's a lot of work. Many of these tasks just roll off my tongue from past experience, so ask yourself:

  • Where can I share my knowledge with other people who organize camps?
  • What if there was some way that all of us DrupalCamp organizers could come together and implement services that make organizing camps easier?
  • How could we provide camp organizers with resources to produce great camps?

During the #AskDries session at DrupalCon Nashville (listen for yourself), Midwest DrupalCamp organizer Avi Schwab asked Dries the following question:

“... given the limited funding the Drupal Association has, where should we go in trying to support our smaller local community events?” — Avi Schwab

Dries then responded with:

“That’s a great question. I actually think it’s a great idea what they (WordCamp) do. Because these camps are a lot of work. ...I think having some sort of central service, for lack of a better term, that helps local camp organizers, I think is a fantastic idea, because we could do a lot of things, like have a camp website out of the box, ... we could have all sorts of best practices out of the box.” — Dries Buytaert

The DrupalCamp Slack community was the first place I was given a link to a spreadsheet with camp history dating back to 2006, where people were adding their target camp dates even if they were just in the planning stages. As a camp organizer I felt connected, I felt empowered to make better decisions, and most of all I could just ask everyone: hey, how are you doing this?

Are you interested in attending the first online DrupalCamp Organizers meeting, on Friday, November 9th at 4:00pm (EST)? RSVP Here.

Earlier this year I volunteered for the Drupal Diversity and Inclusion Initiative (DDI) and was inspired when I heard Tara King on the DrupalEasy podcast talk about how she just created the ddi-contrib channel on the Drupal Slack and started hosting meetings. All jazzed up and motivated by that podcast, I reached out to over 20 different camp organizers from various countries and asked them whether they would be interested in serving on something like this, and if not, whether they would feel represented if this council existed.

Here are some quotes from Camp Organizers:

“I think a DrupalCamp Organizers Council is a great idea. I would be interested in being a part of such a working group. Just now I’m restraining myself from pouring ideas forth, so I definitely think I’m interested in being a part.”

“I am interested in seeing something that gathers resources from the vast experiences of current/past organizers and provides support to camps.”

“I definitely would appreciate having such a council and taking part. I’ve now helped organize DrupalCamp four times, and this was the first year we were looped into the slack channels for the organizers.”

“I really like the idea — what do we need to do to get this started?”

What are the Next Steps?

Based on the positive feedback and the spike in interest from other camp organizers, I have decided to take the plunge and establish our first meeting of DrupalCamp Organizers on Friday, November 9th at 4:00pm (EST). This will be an online Zoom video call, to encourage people to use their cameras so we can actually get to know one another.

The agenda is simple:

  • Introductions from all callers, and one thing they would like to see from the council.
  • Brainstorm the list of items the council should be advocating for.
  • Identify procedures for electing people to the Council: ways to nominate, eligibility criteria, Drupal event organizer experience required etc.
  • Outline of a quick strategic plan.

If you are interested in attending the Zoom online call on Friday, November 9th at 4:00pm (EST), please fill out the RSVP here. If you are interested in participating in the council but are unable to attend, please fill out this survey here.

If you are attending DrupalCamp Atlanta, I will be hosting the Zoom call during one of the concurrent sessions, so feel free to find me.

DrupalCamp Organizers Unite: Is it Time for Camp Organizers to Become an Official Working Group? was originally published in Drupal Atlanta on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Drupal

OpenSense Labs: Revamp Your Large Drupal System. Why and How

Sat, 10/27/2018 - 09:15

The old adage “united we stand and divided we fall” doesn’t stand true for modern web application architecture. 

When developing an enterprise application, a monolithic system is often deployed in the hope that it will process information smoothly, without any breakups between its different features.

With logical components corresponding to the different functional areas of the application, does the monolithic architecture still give a smooth ride as the complexity of the technology increases?


Monolithic is Boring, while Microservices is Full of Possibilities

With digital transformation on the rise, and with implications for entire business operations, moving from monolithic to microservices is a paradigm shift in how businesses approach software development.

Understanding the Monolithic System

A monolithic system is a single-tiered software application in which the user interface and data access code are combined into a single program on a single platform. The multiple components run in the same process, on the same system.

A monolithic architecture is where the multiple layers of the application are tightly coupled together.

Usually, there are three components in a system: the user interface, the data access layer, and the data store.

The user interface acts as the entry point of the application, whether that is a website, a web service, or some other entry point.

The second layer is the data access layer, which wraps the data store. It handles concerns like authenticating with the data store and sanitizing data before it is transmitted there.

The third layer is the database or data store which is the most fundamental part of the system and is responsible for storing arbitrary information (data) and retrieving it. 

Together these three components make up an application. In the case of a monolithic application, the multiple layers of the application are tightly coupled together.

Limitations of a Monolithic Drupal Architecture

The major problems which affect a monolithic architecture application, from both a business and an end-user perspective, are as follows:

  1. Performance Impairment: One of the biggest reasons why people are shifting away from monolithic systems is the heavy lifting they do, which eventually impairs performance. Continuous heavy cron jobs and on-demand computation on page requests by the end user affect the speed.

    In a monolith, all the calculations and computations are handled by the PHP code. And that hurts the business.
     
    • It becomes hard to maintain over time, as any new deployment affects the entire system, rendering wider regression testing a must.
       
    • The performance of pages and content delivery to users suffers due to heavy on-the-fly computation.

      In most cases where the monolith is difficult to manage, the system is already sitting on an n-tier layered architecture; however, the tiers are not independent of and asynchronous with each other. This is the malady of large Drupal systems.
       
  2. Bad User Experience: The poor implementation of the presentation layer of a monolithic Drupal website is another major reason for bad user experience and underperforming applications.

    Some of the bad practices in the Drupal theme layer which increase the rendering time of pages are:
     
    • Database calls placed in the theme layer instead of in controllers, adding to the page load time.
    • Use of legacy, non-optimized JavaScript and CSS.
       
  3. Unscalable Drupal Implementation: Drupal is scalable. But the approach used for feature implementation in Drupal is not scalable with monolithic systems. 
     
    • Improper use of third-party applications in the backend, coupled with heavy reliance on cron jobs, can slow down the system. A more advanced approach would be to fetch and render third-party API data via Drupal.
       
    • Extremely minimal use of the multilayer cache mechanisms provided by Drupal 8 is the biggest culprit.
       
  4. Missing DevOps & Automation: Just like continuous integration, delivery, and deployment, DevOps is a newer phenomenon. With a monolithic application running, the DevOps process cannot enable proper collaboration, and bad code creeps into the architecture, resulting in bad UX.
     
    • There is no Continuous Integration-based build process which executes a set of automated quality checks.
    • Regression testing of the current site is a hectic and costly affair due to the lack of automation in code and functional testing.

What are Microservices?

A microservice is a software development technique where the (monolithic) application is broken into sub-services which are loosely coupled together. Each service is independent of the main system; together they offer value on par with a monolithic system.

Microservices-based architectures enable easy continuous delivery and continuous deployment.

The Benefits of the Layered Architecture of Microservices

Here are the reasons why microservices should be adopted in place of a monolithic Drupal system:

  1. Fault Isolation: Since the services run independently, the failure of one service doesn't affect the overall performance of the system as much as it would in a monolith. The other services will continue to work, which limits the scope of code to be refactored for a resolution.
     
  2. Independent Deployment: An application built as microservices is broken down into multiple component services, so each of these services can be deployed and redeployed independently, with improvements, without compromising the integrity of the application.
     
  3. Easy Maintenance: Microservices require comparatively more effort to build; however, they take a lot less effort to maintain in the long term and ensure better performance of the overall system.
     
  4. Easy Modification: Microservices are easy to understand, since each represents a small piece of functionality, and easy for developers to modify. This also increases the autonomy of individual development teams within an organization, as ideas can be implemented and deployed without having to coordinate with a wider IT delivery function.

Read how Microservices are powering Drupal development

Exploring the MicroServices Architecture

The following is the ideal layering when a large Drupal monolithic system is restructured around microservices:

  • Presentation Layer: This should be a combination of Drupal, and decoupled React apps.
     
  • Aggregation Layer: This should be Drupal being the core of application engaging with microservices and data store layers.
     
  • Business Logic Layer: This should be Node.js based services executing specific tasks.
     
  • Persistence Layer: This should be the primary store of the most important company and product data. It will engage with Drupal to handle CRUD operations in real time. It will also engage with the decoupled React apps in the presentation layer to help them render data on the frontend without any expensive Drupal calls or backend PHP execution.
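
As a rough illustration of this layering, here is a hypothetical docker-compose.yml sketch; the images, service names and wiring are assumptions for illustration, not a recommended production setup:

    # Hypothetical sketch of the layered architecture described above.
    version: "3"
    services:
      drupal:              # aggregation layer (and the Drupal part of the presentation layer)
        image: drupal:8
        ports:
          - "80:80"
        depends_on:
          - logic
          - datastore
      logic:               # business logic layer: a Node.js service for heavy computation
        image: node:10
        command: node /app/service.js
        volumes:
          - ./services/logic:/app
      datastore:           # persistence layer engaged by Drupal and the React apps
        image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
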
Steps: How to Plan and Execute the Transition to a Microservices Architecture

The transition from a present monolithic architecture to the layered microservices architecture can be done in an incremental fashion. Here’s how the plan can be executed:

  1. Identify the business logic for components like endorsements, email triggers, and all other computations and processes which block the delivery of pages to the end user.
     
  2. Create independent Node.js-based services which handle all the logic for the above-identified processes; they communicate among themselves via messaging queues and communicate with Drupal via a push-based, cronless mechanism.
     
  3. Create a data store. Drupal will push any change in these entities to the cronless mechanism in real time.
     
  4. Use progressively decoupled Drupal for the following purposes, limited in scope:
     
    • For the presentation layer
    • For the user, role and subscription management system
    • To manage decoupled React-based pages and blocks for search, which will be powered by an independent Elastic service
    • To manage decoupled React-based pages/blocks which pull data in a scalable and fast way from the cronless datastore
    • For CMS features like SEO, schema, static pages, CCMS integration etc.
       
  5. Redevelop the Drupal theme layer to remove all bad practices in the current code base.
Conclusion 

Web applications need to evolve along with the rapid pace of technology and their users. Digital users expect more in terms of better content recommendations, and better ways for accessing websites and data.

As easy as the idea sounds, building microservices is just as complex. Streamlining the overall application development lifecycle to support frequent releases and QA can lead to a far better product.

This gives a boost when managing a large Drupal system. Contact us at hello@opensenselabs.com to learn more about microservices architectures and their value to your organizational setup.

Categories: Drupal

KatteKrab: Six years and 9 months...

Sat, 10/27/2018 - 06:05
Saturday, October 27, 2018 - 13:05

Six years and 9 months... is a relatively long time. Not as long as some things, longer than others. Relative. As is everything.

But Six years and 9 months is the length of time I've been on the board of the Drupal Association.

I was elected to serve on the board by the community in February 2012, and then nominated to serve for another two terms. That second term expires on 31 October. My original candidate statement makes for somewhat nostalgic reading now... and it's now that I wonder what I achieved. If anything?

But that's the wrong question. There's nothing useful to be gained in trying to answer it.

Instead - I want to reflect on what I learned.

I learned something from everyone at that table. Honestly, I never really lost my sense of imposter syndrome, and I'm freely and gleefully willing to admit that.

Cary Gordon - we shared a passion for DrupalCon. That show grew into the incredible event it is because of seeds you sowed. And your experience running big shows, and supporting small community libraries, seemed to be the perfect mix for fueling what Drupal needed.

Steve Purkiss - we were elected together! Your passion for cooperatives, for Drupal, and for getting on with it, and making things happen was infectious! Thank you for standing with me in those weird first few months of being in this weird new place, called the board of the Drupal Association!

Pedro Cambra - I wish I'd heed the lesson you taught me more often. Listen carefully. Speak only when there's something important to say, or to make the case for a perspective that's being missed. But also good humour. And Thank you for helping make the election process better, and helping the DA "own" the mechanics.

Morten - brother. I can't even find the words to say. Your passion for Drupal, for theming, and for our community always inspired me. I miss your energy.

Angie "webchick" Byron - mate! I still can't fathom how you did what you do so effortlessly! Well, I know it's not effortless, but you make it look that way. Your ability to cut through noise, sort things out, get things done, and inspire the Drupal masses to greatness is breathtaking.

Matthew Saunders - you made me appreciate the importance of governance from a different perspective. Thank you for the work you did to strengthen our board processes.

Addison Berry - Sorry Addi - this is a bit shameful, but it was the mezcal, tequila and bourbon lessons that really stuck.

Danese Cooper - I was so grateful for your deep wisdom of Open Source, and the twists and turns of the path it's followed over such a long time. Your eye to pragmatism over zealotry, but steadfast in the important principles.

Shyamala Rajaram - Oh Shyamala! I can't believe we only first met at DrupalCon Mumbai, or perhaps it was only the first time, this time! Thank you for teaching us all how important it is for us to be in India, and embrace our global community.

Ryan Szrama - you stepped onto the board at such a tough moment, but you stepped up into the role of community elected Director, and helped make sense out of what was happening. Sorry not to see you in Drupal Europe.

Rob Gill - Running. I didn't learn this. Sorry.

Tiffany Farriss - You're formidable! You taught me the importance of having principles, and sticking to them. And then using them to build a foundation in the bedrock. You do this with such style, and grace, and good humour. I'm so thankful I've had this time with you.

Jeff Walpole - You made me question my assumptions all the time! You made me laugh, and you gave me excellent bourbon. You always had a way of bringing us back to the real world when we waded too deep into the weeds.

Vesa Palmu - So many things - but the one that still resonates, is we should all celebrate failure. We should create ritual around it, and formalise the lessons failure teaches. We all learn so much more from mistakes, than from successes.

Sameer Verna - For a time, we were the only linux users at the table, and then I defected back to MacOS - I still feel a bit guilty about this, I admit. You championed Free Software at every step - but also, so often, guided us through the strategic mumbo jumbo, to get to the point we needed to.

Steve Francia - "It's not as bad as you all seem to think it is" I don't know why, but I hear this mantra, spoken with your voice, whenever I think of you. Thank you for your Keynote in Nashville, and for everything.

Mike Lamb - I've not yet put into practice the lesson I need to learn from you. To switch off. To really go home, and be home, and switch off the world. I need me some of that, after all of this. Thank you so much for all you've done, but more for your positive, real world perspective. Ta!

Annie - I missed your presence in Germany so much - I feel like I've still got so much to learn from you. You bridged the worlds of digital and marketing, and brought much needed perspective to our thinking. Twas an honour to serve with you.

Audra - With you too, I feel like I was only beginning to get into the groove of the wisdom you're bringing to the table. I hope our paths continue to cross, so I can keep learning!

Baddy Sonja Breidert - A powerful lesson - as volunteers, we have to account for the time, passion and energy we borrow from the rest of our lives, when we give it to Drupal. And Drupal needs to properly recognise it too.

Ingo Rübe - You taught me how to have courage to bring big ideas to the table, and show grace in letting them go.

Michel van Velde - You taught me to interrogate my assumptions, with fun, with good humour, and honest intention of doing good.

George Matthes - You taught me the power of questioning the received wisdom from history. You reminded me of the importance of bringing fresh eyes to every challenge.

Adam Goodman - a simple, but important lesson. That leadership is about caring for people.

Suzanne Dergacheva - newly elected, and about to start your term - I had too little chance to learn from you at the board table, but I already learned that you can teach the whole community kindness by giving them carnations! #DrupalThanks to you too. And power to your arms as you take the oars as a community elected director, and help row us forward!

And to all the staff who've served over the years, your dedication to this organisation and community it serves is incredible. You've all made a difference, together, to all of us. Special mentions for four of you...

Kris - from Munich to Vienna - my constant companion, and my dive bar adventure buddy. Til next time there is cheese...

Holly - Inspiring me to knit! Or, more accurately, to wish I could knit better than I can. To knit with conviction! It's a metaphor for so much, but also very very literally. Also I miss you.

Steph - Your vibrant enthusiasm, and commitment to DrupalCon always inspired me. Your advice on food trucks in Portland nourished me.

Megan - where to start? I'd never finish. Kindness, compassion, steely focus, commercial reality, "operational excellence", and cactus margaritas.

I save my penultimate words for Dries... Thank you for having faith in me. Thank you for creating Drupal, and for sharing it with all of us. Also, thank you sharing many interesting kinds of Gin!

These final words are for Tim - as you take the reins of this crazy sleigh ride into the future - I feel like I'm leaving just before the party is really about to kick off.

Go you good thing.

Good bye, so long, and thanks for all the fish.

The DA does amazing work.
If you rely on Drupal, you rely on them.

Please consider becoming a member, or a supporting partner.

Categories: Drupal

Dcycle: Local development using Docker Compose and HTTPS

Sat, 10/27/2018 - 04:00

This article discusses how to use HTTPS for local development if you use Docker and Docker Compose to develop Drupal 7 or Drupal 8 (indeed any other platform as well) projects. We’re assuming you already have a technique to deploy your code to production (either a build step, rsync, etc.).

In this article we will use the Drupal 8 site starterkit, a Docker Compose-based Drupal application that comes with everything you need to build a Drupal site with a few commands (including local HTTPS); we’ll then discuss how HTTPS works.

If you want to follow along, install and launch the latest version of Docker, make sure ports 80 and 443 are not used locally, and run these commands:

cd ~/Desktop
git clone https://github.com/dcycle/starterkit-drupal8site.git
cd starterkit-drupal8site
./scripts/https-deploy.sh

The script will prompt you for a domain (for example my-website.local) to access your local development environment. You might also be asked for your password if you want the script to add “127.0.0.1 my-website.local” to your /etc/hosts file. (If you do not want to supply your password, you can add that line to /etc/hosts before running ./scripts/https-deploy.sh).

After a few minutes you will be able to access a Drupal environment on http://my-website.local and https://my-website.local. For https, you will need to explicitly accept the certificate in the browser, because it’s self-signed.

Troubleshooting: if you get a connection error, try using an incognito (private) window in your browser, or a different browser.

Being a security-conscious developer, you probably read through ./scripts/https-deploy.sh before running it on your computer. If you haven’t, you are encouraged to do so now, as we will be explaining how it works in this article.

You cannot use Let’s Encrypt locally

I often see questions related to setting up Let’s Encrypt for local development. This is not possible because the idea behind Let’s Encrypt is to certify that you own the domain on which you’re working; because no one uniquely owns localhost, or my-project.local, no one can get a certificate for it.

For local development, the Let’s Encrypt folks suggest using trusted, self-signed certificates instead, which is what we are doing in our script.

(If you are interested in setting up Let’s Encrypt for a publicly-available domain, this article is not for you. You might be interested, instead, in Letsencrypt HTTPS for Drupal on Docker and Deploying Letsencrypt with Docker-Compose.)

Make sure your project works without https first

So let’s look at how the ./scripts/https-deploy.sh script we used above works.

Let’s start by making sure our project works without https, then add https access in a separate container.

In our starterkit project, you can run:

./scripts/deploy.sh

At the end of that script, you will see something like:

If all went well you can now access your site at:
 => http://0.0.0.0:32780/user/reset/...

Docker is serving our application using a random non-secure port, in this case 32780, and mapping it to port 80 on our container.

If you use Docker Compose for local development, you might have several applications running at the same time on different host ports, all mapped to port 80 on their respective container. At the end of this article you should be able to see each of them on port 443, something like:

  • https://my-application-one.local
  • https://my-application-two.local
  • https://my-application-three.local

The secret to all your local projects sharing port 443 is a reverse proxy container which receives requests to port 443, and indeed port 80 also, and acts as a sort of traffic cop to direct traffic to the appropriate container.

That is why your individual projects should not directly use ports 80 and/or 443.

Adding an Nginx proxy container in front of your project’s container

An oft-seen approach to making your project available locally via HTTPS is to fiddle with your Dockerfile, installing openssl, setting up the certificate there; and rebuilding your container. This can work, but I would argue that it has significant drawbacks:

  • If you have several projects running on https port 443 locally, you could only develop one at a time because you only have one 443 port on your host machine.
  • You would need to maintain the SSL portion of your code for each of your projects.
  • It would go against the principle of separation of concerns which makes containers so robust.
  • You would be reinventing the wheel: there’s already a well-maintained Nginx proxy image which does exactly what you want.
  • Your job as a software developer is not to set up SSL.
  • If you decide to deploy your project to a production Kubernetes cluster, it would no longer make sense for each of your Apache containers to support SSL.

For all those reasons, we will loosely couple our project with the act of serving it via HTTPS; we’ll leave our project alone and place an Nginx proxy in front of it to deal with the SSL/HTTPS portion of our local deployment.

Local https for one or more running projects

In this example we set up only one starterkit application, but real-world developers often need HTTPS with more than one application. Because you only have one local 443 port for HTTPS, we need a way to differentiate between our running applications.

Our approach will be for each of our projects to have an assigned local domain. This is why the https script we used in our example asked you to choose a domain like starterkit-drupal8.local.

Our script stored this information in the .env file at the root of your project, and also made sure it resolves to localhost in your /etc/hosts file.

Launching the Nginx reverse proxy

To me the terms “proxy” and “reverse proxy” are not intuitive. I’ll try to demystify them here.

The term “proxy” means something which acts on behalf of something else; the term is already widely used to denote a web client being hidden from the server. So, a server might deliver content to a proxy which then delivers it to the end user, thereby hiding the end user from the server.

In our case we want to do the reverse: the client (you) is not placing a proxy in front of it; rather the application is placing a proxy in front of it, thereby hiding the project server from the browser: the browser communicates with Nginx, and Nginx communicates with your project.

Hence, “reverse proxy”.

Our reverse proxy uses a widely used and well-maintained GitHub project. The script you used earlier in this article launched a container based on that image.
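
That image's documented usage boils down to publishing ports 80 and 443 and mounting the Docker socket so the proxy can discover containers as they start and stop. Here is a Docker Compose sketch equivalent to what such a script might launch (based on the nginx-proxy image's documentation, not the script's exact invocation; the certs path is illustrative):

    # Sketch based on the jwilder/nginx-proxy documentation.
    version: "3"
    services:
      nginx-proxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy watch containers come and go
          - ./certs:/etc/nginx/certs:ro                # self-signed certs named <domain>.crt / <domain>.key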

Linking the reverse proxy to our application

With our starterkit application running on a random port (something like 32780) and our nginx proxy application running on ports 80 and 443, how are the two linked?

We now need to tell our Nginx proxy that when it receives a request for domain starterkit-drupal8.local, it should display our starterkit application.

There are a few steps to this, most handled by our script:

  • Your project’s docker-compose.yml file should look something like this: it needs to contain the environment variable VIRTUAL_HOST=${VIRTUAL_HOST}. This takes the VIRTUAL_HOST environment variable that our script added to the ./.env file, and makes it available inside the container.
  • Our script assumes that your project contains a ./scripts/deploy.sh file, which deploys our project to a random, non-secure port.
  • Our script assumes that only the Nginx Proxy container is published on ports 80 and 443, so if these ports are already used by something else, you’ll get an error.
  • Our script appends VIRTUAL_HOST=starterkit-drupal8.local to the ./.env file.
  • Our script attempts to add 127.0.0.1 starterkit-drupal8.local to our /etc/hosts file, which might require a password.
  • Our script finds the network your project is running on locally (all Docker Compose projects run on their own local named network), and gives the reverse proxy access to it.
That’s it!

You should now be able to access your project locally with https://starterkit-drupal8.local (port 443) and http://starterkit-drupal8.local (port 80), and apply this technique to any number of Docker Compose projects.
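
Concretely, the application side of that wiring is just the VIRTUAL_HOST variable. Here is a sketch of the relevant fragment of a project's docker-compose.yml (the service name and image are illustrative; the starterkit's real file defines more):

    # Illustrative fragment, not the starterkit's actual file.
    version: "3"
    services:
      drupal:
        image: drupal:8                    # illustrative; the starterkit builds its own image
        ports:
          - "80"                           # container port 80, mapped to a random host port
        environment:
          - VIRTUAL_HOST=${VIRTUAL_HOST}   # read from ./.env, e.g. starterkit-drupal8.local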

Troubleshooting: if you get a connection error, try using an incognito (private) window in your browser, or a different browser; also note that you need to explicitly trust the certificate.

You can copy-paste the script into your Docker Compose project at ./scripts/https-deploy.sh if:

  • Your ./docker-compose.yml contains the environment variable VIRTUAL_HOST=${VIRTUAL_HOST};
  • You have a script, ./scripts/deploy.sh, which launches a non-secure version of your application on a random port.

Happy coding!

Categories: Drupal

Palantir: Resources for the Future: The VALUABLES Consortium

Fri, 10/26/2018 - 21:05

Utilizing an existing Drupal platform to measure the socio-economic benefits of satellite data.

rff.org/valuables

Since the first satellite was launched into space in 1957, satellites have been sent into orbit for a wide array of purposes: they’re used to make star maps, relay television and radio signals, provide navigation, and gather information about Earth. But have you ever wondered why satellite data is important to society?

Like all types of information, the information that satellites gather about our planet is valuable because it can help us make decisions that lead to better outcomes for people and the environment. At the same time, it is challenging to measure this value in terms that are socioeconomically meaningful, like lives saved, increases in revenue, or acres of forest protected. In 2016, Resources for the Future (RFF) created the Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES) to help address this challenge.

The VALUABLES Consortium

RFF is an independent, nonprofit research institution working to improve environmental, energy, and natural resource decisions through impartial economic research and policy engagement. The VALUABLES Consortium is a cooperative agreement between RFF and the National Aeronautics and Space Administration (NASA) that is building a community of Earth and social scientists committed to quantifying the socioeconomic benefits of Earth observations.

The consortium’s work focuses on two types of activities:

  • Conducting case studies, known as impact assessments, that measure the socioeconomic benefits that satellite information provides when people use it to make decisions
  • Developing educational materials and activities designed to support the Earth science community in quantifying the societal value of its work.
Creating a Place on the Web to Share Resources

To amplify the consortium’s work, RFF wanted to create a place on the web where the VALUABLES Consortium could share the results of its impact assessments and provide Earth scientists with access to resources about quantifying the societal value of their work.

Palantir's Approach

Palantir originally partnered with RFF back in 2015 when we helped them redesign their website to showcase their unique content in a way that accurately reflected their core values. We built them a solid Drupal 7 codebase that they could extend and adapt to changing business needs over time.

For the VALUABLES project, Palantir determined we could easily leverage that carefully built platform to quickly create a new set of templates which would align the entire web presence while addressing new needs.

To create the VALUABLES section of the site, Palantir built on RFF’s existing Drupal 7 theme with the creation of some new VALUABLES-specific components.

These included:

Building on the existing theme and implementing subtle design changes to existing components allowed us to give the VALUABLES sub-section of the RFF site a unique (yet cohesive) look, without needing to build everything from scratch.

Extending the existing platform also ensured the VALUABLES section would have a layout consistent with the rest of the RFF site.

What Does Future Success Look Like?

RFF will be measuring the success of the consortium’s website by looking at factors like how the site’s audience grows over time. They hope that the VALUABLES community will use the platform to learn more about the consortium’s activities, access information about the case studies the consortium is completing, and share the tools it is building.

RFF takes an economic lens toward environmental and energy-based issues, highlighting how decisions affect both our environment and our economy. Historically, RFF has played an important role in environmental economics by developing the methods and studies that help policymakers understand the value of things that are hard to value, like clean air and clean water. Now, a few decades later, RFF is working with NASA on this initiative to value information. Work to quantify the societal benefits of Earth observations is important for a number of reasons. For example, it can help demonstrate return on investments in satellites. It can also provide Earth scientists with an effective way to communicate the value of satellite remote sensing work to policymakers and the public.

This project has been nominated as a “Working Toward a Better Tomorrow” category finalist in the 2018 Acquia Engage Awards.

Categories: Drupal

AddWeb Solution: Creating Customized Cloning Module for Drupal 8 Website

Fri, 10/26/2018 - 20:04

Cloning is a concept that has existed in almost every industry for ages, and the world of website development is no different. Multiple tools are available to clone a website, be it via the command line or a GUI. Being in the business of coding for years now, we at AddWeb have cloned a number of websites and website pages to fulfill the requirements of our clients.

Over a span of 6+ years in the industry as a leading IT company, we have worked with a host of international clients. This time, a client, whose name we cannot disclose for legal and ethical reasons, came up with a requirement to clone multiple pages of their Drupal-based website. And we buckled up to deliver our expertise and stand true to the client’s requirement.

The Client’s Requirement:
Simple cloning is not an extraordinary task, since modules for it are readily available from the community. But this project required us to clone multiple pages at a time, without affecting the original pages. The pages also had to be cloned in such a manner that their components were thoroughly included. Scrutinizing the nature of the requirement, we realized that this type of cloning required us to either custom-create a module or make alterations to the existing module available for cloning. The website was in Drupal 8, and we knew it was about time to show some more love for our most-loved tech stack.

The Process of Cloning:
Drupal 8 has always been our favorite sphere to work in. So, all excited and geared up with our experience, we searched out the available cloning module on Drupal’s community site, Drupal.org. The name of this module is ‘Entity Clone’.

The Emergence of the Challenge:
But as they say, “calm waters do not make a good sailor”, and the water was not calm for us either: the cloning module we found from the community came with a limitation, namely that only one page could be cloned at a time. So, now was the time to put our expert knowledge of Drupal to use and create a custom module that fulfills the client’s requirement.

Overcoming the Challenge:
We had built a couple of Drupal modules before, so we knew we would be able to create a fine custom module for cloning. And we did! We created a custom module which comes with multiple page templates, organized group-wise. One just needs to select the required page template and submit the form, and the cloning is done: every selected page gets cloned along with its components. This turned out to be immensely useful for the editor, because it saves both the admin’s time and energy. The process was otherwise quite tedious and time-consuming, since the admin had to clone one page at a time; here, with one single click, multiple pages are cloned together. We’ll definitely share the credit for this custom cloning module with the Entity Clone module, since we used its script and made some alterations and additions to it in order to make the multiple-page cloning feature possible.

The Final Word:
We, at AddWeb Solution Pvt. Ltd., believe the ultimate achievement of any work we do lies in the satisfaction the client feels on delivery of the final product. And we don’t know whether we’re just lucky or simply good at our work, but like others, this client too responded with appreciation - not just for the quality of the work we deliver, but also for the ‘Artful Agile’ process we choose to follow!

Categories: Drupal

Aten Design Group: Decoupled Drupal + Gatsby: Automating Deployment

Fri, 10/26/2018 - 19:15

To get started with decoupling Drupal with Gatsby, check out our previous screencasts here.

In this screencast, I'll be showing you how to automate content deployment. So when you update the content on your Drupal site, it will automatically rebuild/update your Gatsby site on Netlify.

Download a Transcription of this Screencast

Download Transcription

Categories: Drupal

OpenSense Labs: How to implement Continuous Deployment with Drupal

Fri, 10/26/2018 - 17:16

The Guardian, one of the most trusted news media organisations, took a different approach for their membership and subscriptions apps. Rather than emphasising lengthy validation in staging environments, The Guardian’s Continuous Deployment pipeline places greater focus on ensuring that new builds really work in production. Their objective was to let developers know that their code has run successfully in the real world, instead of just observing green test cases in a sanitised and potentially unrepresentative environment.


Thus, The Guardian reduced the amount of testing run pre-deployment and extended the deployment pipeline to include feedback from tests run against the production site. Such is the significance of a lightweight Continuous Deployment pipeline, which has helped a large organisation like The Guardian focus on production validation instead of a large suite of acceptance tests. The same benefits can be seen in Drupal-based projects, where Continuous Deployment allows us to iterate on Drupal web applications at speed.

Read more on the implementation of Continuous Integration and Continuous Delivery with Drupal

A Brief Timeline of Continuous Deployment

The Agile Alliance has stated that the origins of Continuous Deployment can be traced to the early 2000s. In 2002, Kent Beck, creator of Extreme Programming, mentioned Continuous Deployment in early (unpublished) discussions of applying Lean ideas to software, where undeployed features are seen as inventory. However, it took multiple years for the practice to be refined and codified.

Later, in the proceedings of the Agile 2006 Conference, the first article describing the core of Continuous Deployment - The Deployment Production Line - came into the limelight. Published by Jez Humble, Chris Read and Dan North, it was a codification of the practices of numerous ThoughtWorks UK teams.

By 2009, the practice of Continuous Deployment was well established, as can be seen in the article Continuous Deployment at IMVU by Timothy Fitz. Not only is it beneficial in Agile processes, but its features can also be applied in methodologies such as Lean startup or DevOps.

Continuous Deployment in focus. Source: Atlassian

While Continuous Integration refers to the process of automatically building and testing your software on a regular basis, Continuous Delivery is the logical next step, which ensures that your code is always in a release-ready state. The ultimate culmination of this process is Continuous Deployment.

In Continuous Deployment, every alteration that passes all stages of your production pipeline is released to the customers

In Continuous Deployment, every alteration that passes all stages of your production pipeline is released to the customers with no human intervention, and only a failed test will prevent a new alteration from being deployed to production. It is a spectacular way to tighten the feedback loop with your customers and take pressure off the team, as it takes away the so-called ‘release day’ from the equation. It allows developers to focus on creating software, and they can see their work go live minutes after they have put in all their effort.

Why Should you Consider Continuous Deployment?

Continuous Deployment benefits both the internal team who are implementing it and the stakeholders in your company.

For internal team
  • Instead of performing a weekly or a monthly release, moving to feature-driven releases enables faster and finer-grained upgrades and helps in debugging and regression detection by only altering one thing at a time.
  • By automating every step of the process, you make it self-documenting and repeatable.
  • By making the deployment to the server fully automated, a repeatable deployment process can be created.
  • By automating the release and deployment process, you can constantly release ongoing work to the staging and QA servers, thereby giving visibility of the state of development.
Moving to feature-driven releases enables faster and finer-grained upgrades

For stakeholders in the company
  • Instead of waiting for a fixed upgrade window, you can release features when they are ready thereby getting them to the customer faster. As you are constantly releasing to a staging server while developing them, internal customers can see the alterations and take part in the development process.
  • Managers will see the results of work faster, and progress will be visible when you release more often.
  • If a developer needs a few more hours to make sure that the feature is in perfect working condition, then the feature will go out a few hours later and not when the next release window opens.
  • Sysadmins will not have to perform the releases themselves. Small, discrete feature releases will enable easier detection of the alterations that have affected the system adversely. 
Continuous Deployment Tools


Unit tests and functional tests put the code into as many execution scenarios as possible to predict its behaviour in production. Unit testing frameworks include NUnit, TestNG and RSpec, among others.
 
IT automation and configuration management tools like Puppet and Ansible manage code deployment and hosting resource configuration. Tools like Cucumber and Calabash can help in setting up integration and acceptance tests.
 
Monitoring tools like AppDynamics and Splunk can help in tracking and reporting any alterations in application or infrastructure performance due to the new code. Management tools like PagerDuty can trigger IT incident response. Monitoring and incident response for Continuous Deployment setups should be real-time, to shorten the time to recovery when there are problems with the code.
 
Rollback capabilities are essential in the deployment toolset to detect any unexpected or undesired effects of new code in production and mitigate them faster. Moreover, canary deployment and sharding, blue/green deployment, feature flags or toggles, and other deployment controls can be useful for organisations looking to safeguard against user disruption from Continuous Deployment.
 
Some applications can deploy in containers such as Docker and Kubernetes for isolating updates from the underlying infrastructure.

Continuous Deployment with Drupal


A digital agency worked with Drupal 8, Composer, GitHub, Pantheon and CircleCI around Continuous Integration and Deployment. The project involved moving from internal hosting to the cloud (in this case, Pantheon), moving the main sites from Drupal 7 to Drupal 8, and implementing a new design.

To the cloud

Pantheon was chosen as the cloud host for the new Drupal sites. It was chosen initially for features like ‘Custom Upstreams’, one-click core updates, simple deployments between Dev, Test, and Live environments, Multidevs, and the fact that each site is a Git repo at heart. Terminus (Pantheon’s CLI tool) was heavily used and appreciated.

Migration to Drupal 8

The migration focused on two main umbrella sites and one news site serving both umbrella sites. A content refresh showed that the only content that needed to be migrated was the news articles. The configuration management of Drupal 8 was found to be much nicer than that of Drupal 7.

Custom Design

As Drupal was not the only web platform they were using, instead of building a Drupal theme they built a platform-agnostic project with a new look and feel. It was based on Zurb Foundation and was just HTML, CSS, and JavaScript.
 
Grunt was used as the build tool. So when they have a new release, they just commit and push to GitHub. That triggers a CircleCI workflow which tags a new release and publishes the release artefact as an npm package to Artifactory. From there, the npm package can be pulled into any project, including Drupal.
 
It should be noted that the published package includes only the CSS, JS, libraries and other assets. After publishing, a static site is created with the package and corresponding HTML templates on a cloud host, as a reference implementation.

Deployment Process

They had an ‘upstream’ repo on GitHub named umbrella-upstream, which is a Composer-based Drupal 8 project with a custom install profile comprising custom modules, package.json, and deploy scripts. Each of the sites (umbrella-site X, umbrella-site Y, etc.) was also in a GitHub repo as a Composer-based Drupal 8 project and had umbrella-upstream configured as a remote.
 
When they push an alteration to the upstream repo, a set of CircleCI workflows starts: it runs some Codeception acceptance tests, and the alterations get merged from umbrella-upstream down to each umbrella-site X/Y repo.
 
Then, another CircleCI workflow builds, tests, and pushes a full Drupal umbrella-site X/Y install to the corresponding Pantheon site X/Y, all the way up to the Test environment. Quicksilver hooks were used to send any alterations made on Pantheon back to the site repos.

Entire Workflow involved:

  • Code alterations and a Git commit in the custom design repo
  • npm update custom-design --save-dev, grunt, and a Git commit in the umbrella-upstream repo

Finally, the alterations show up in the Test environment of each site on Pantheon.
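
For a feel of what such a pipeline looks like in code, here is a hypothetical CircleCI configuration sketch for the build-test-push stage; the job name, container image and deploy script are assumptions, since the article does not publish the agency's actual config:

    # Hypothetical .circleci/config.yml sketch (CircleCI 2.0 syntax).
    version: 2
    jobs:
      build-test-deploy:
        docker:
          - image: circleci/php:7.2-node   # PHP plus Node, for Composer and npm/grunt steps
        steps:
          - checkout
          - run: composer install --no-interaction --prefer-dist
          - run: ./vendor/bin/codecept run acceptance   # Codeception acceptance tests
          - run: ./scripts/push-to-pantheon.sh          # hypothetical script wrapping Terminus
    workflows:
      version: 2
      build-and-deploy:
        jobs:
          - build-test-deploy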

Conclusion

It is of paramount importance that you keep iterating and deploy software at speed and with efficacy. Continuous Deployment is a great strategy for software releases, wherein a code commit that passes the automated testing phase is automatically released into the production environment.
 
Drupal deployment can benefit to a great extent through the incorporation of Continuous Deployment in the project development process. The biggest advantage of doing so is that it makes the alterations visible to the application’s users.
 
OpenSense Labs is committed to providing wonderful digital experiences to organisations with its suite of services.
 
To make your next Drupal-based project supremely efficacious through the implementation of Continuous Deployment, ping us at hello@opensenselabs.com

Categories: Drupal

Gizra.com: 4 Million Euros in 5 Days, with Elm and Drupal

Fri, 10/26/2018 - 09:00

After almost one year, and that $1.6M for a single item, we had a couple more (big) sales that are worth talking about.

If you expect this to be a pat on the shoulder kind of post, where I’m talking about some hyped tech stack, sprinkled with the words “you can just”, and “simply” - while describing some incredible success, I can assure you it is not that.

It is, however, also not a “we have completely failed” self reflection.

Like every good story, it’s somewhere in the middle.

The Exciting World of Stamps

Many years ago, when Brice and I founded Gizra, we decided “No gambling, and no porn.” It’s our “Do no evil” equivalent. Throughout Gizra’s life we have always had at least one entrepreneurial project going on, in different fields and areas. On all of them we just lost money. I’m not saying this is necessarily a bad thing - one needs to know how to lose money; but obviously it would be hard to call it a good thing.

Even in the beginning days, we knew something that we know also now - as a service provider there’s a very clear glass ceiling. Take the number of developers you have, multiply by your hourly rate and the number of working hours, and that’s your optimal revenue. Reduce at least 15% (and probably way more, unless you are very minded about efficiency) and now you have a realistic revenue. Building websites is a tough market, and it’s never getting easier - but it pays the salaries and, all things considered, I think it’s worth it.

While we are blessed with some really fancy clients, and we are already established in the international Drupal & Elm market, we wanted to have a product. I tend to joke that I already know all the pain points of being a service provider, so it’s about time I learn the ones of having a product too.

Five years ago Yoav entered our door with the idea of CircuitAuction - a system for auction houses (the “going once… going twice…” type). Yoav was born to a family of stamps collectors and was also a Drupaler. He knew building the system he dreamed of was above his pay grade, so he contacted us.

Boy, did we suck. Not just Gizra. Also Yoav. There was a really good division between us in terms of suckiness. If you think I’m harsh with myself, picture yourself five years ago, and tell yourself what you think of past you.

I won’t go much into the history. Suffice it to say that my belief that only on the third rewrite of any application do you start getting it right was finally put to the test (and proved itself right). Also, it’s important to note that at some point we turned from service providers to partners, and now CircuitAuction is owned by Brice, Yoav, and myself. This part will be important when we reach the “Choose your partners right” section.

So the first official sale, along with the third version of CircuitAuction, happened in Germany in March 2017. I’ve never had a more stressful time at work than the weeks before and during the sale. I was completely exhausted. If you ever heard me preaching about work-life balance, you would probably understand how much it took me by surprise that I worked 16 hours a day, weekdays and weekends, for six weeks straight.

I don’t regret doing so. Or more precisely, I would probably really regret it if we had failed. But we were equipped with a lot of passion to nail it. Still, when I think of those pre-sale weeks I cringe.

Stamp Collections & Auction Houses 101

Some people, very few (and unfortunately for you the reader, you are probably not one of them) are very, very (very) rich. They are rich to the point that for them, buying a stamp in thousands or hundreds of euros is just not a big deal.

Some people, very few (and unfortunately for you the reader, you are probably not one of them), have stamp collections or just a couple of valuable stamps that they want to sell.

This is where the auction house comes in. They are not the ones that own the stamps. No, an auction house’s reputation is determined by the two Rolodexes they have: the one with the collectors, and the one with the sellers. Privacy and confidentiality, along with honesty, are obviously among the most important traits for the auction house.

So, you might think “They just need to sell a few stamps. How hard can that be?”

Well, there are probably harder things in life, but our path led us here, so this is what we’re dealing with. The thing is that along those five days of a “live sale” there are about 7,000 items (stamps, collections, postcards etc.) that beforehand need to be categorized, arranged, curated and passed through an extensive and rigorous workflow (if you were buying these 4 stamps for 74,000 euros, you’d expect them to be carefully handled, right?).

Screenshot of the live auction webapp, built with Elm. A stamp is being sold in real time for lots of Euros!

Now mind you that handling stamps is quite different from handling coins, and both are completely different from paintings. To the unprofessional eye these are “just” auctions, but when dealing with such expensive items, and such specific niches, each one has different needs and jargon.

We Went Too Far. Maybe.
  • Big stamp sales are a few million euros; those of coins are in the hundreds of millions.
  • The logic for stamp auctions is usually more complex than that of coins.
  • Heinrich Koehler, our current biggest client and one of the most prestigious stamp auction houses in the world, has an even crazier logic. Emphasis on the crazier. Being such a central auction house, every case that would normally be considered an edge case manifests itself in every sale.

So, we went with a “poor” vertical (may we all be as poor as this vertical), and with a very complex system. There are a few reasons for that, although only time will tell if it was a good bet:

Yoav, our partner, has a lot of personal connections in this market - he literally played as a kid or had weekend barbecues with many of the existing players. The auction houses by nature rely heavily on those relations, so having a foothold in this niche market is an incredible advantage.

Grabbing the big player was really hard. Heinrich Koehler requires a lot of care, and an enormous amount of development. But now that we got there, we have one hell of a bragging right.

There’s also an obvious reason that is often not mentioned - we didn’t know better. Until very late in the process, we never asked those questions, as we were too distracted chasing the opportunities that kept popping up.

But the above distracts from probably the biggest mistake we made along the years: not building the right thing.

If you are in the tech industry, I would bet you have seen this in one form or another. The manifestation of it is the dreaded “In the end we don’t need it” sentence floating in the air, and a team of developers and project managers face-palming. Developers are cynical for a reason. They have seen this one too many times.

I think that developing something that is only 90% correct is much worse than not developing it at all. When you don’t have a car, you don’t go out of town for a trip. When you do, but it constantly breaks or doesn’t really get you where you wanted, you also don’t get to hike; instead you are super frustrated at the expense of the misbehaving car, and at the fact that it’s, well, not working.

We were able to prevent that from happening to many of our clients, but fell into the same trap ourselves. We assumed some features were needed. We thought we should build it in a certain way. But we didn’t know. We didn’t always have a real use case, and we ended up rewriting parts over and over again.

The biggest change, and what has put us on the right path, was when we stopped developing on assumptions and moved one line of code at a time, only when it was backed by real use cases. Sounds trivial? It is. Unfortunately, doing the opposite - “developing by gut feeling” - is trivial too. I find that staying on the right path requires more discipline.

Luckily, at some point we have found a superb liaison.

The Liaison, The Partners, and the Art of War

Tobias (Tobi) Huylmans, our liaison, is a person who really influenced the product for the better and helped shape it into what it is.

He’s a key person in Heinrich Koehler dealing with just about any aspect of their business. From getting the stamps, describing them, expertizing them (i.e. being the professional that gives the seal of approval that the item is genuine), teaching the team how to work with technology, getting every possible complaint from every person nearby, opening issues for us on GitHub, getting filled with pure rage when a feature is not working, getting excited when a feature is working, being the auctioneer at the sale, helping accounting with the bookkeeping, and last, but not least, being a husband and a father.

There are quite a few significant things I’ve learned working with him. The most important is - have someone close to the team that really knows what they are talking about when it comes to the problem area. That is, I don’t think that his solutions are always the best ones, but he definitely understands the underlying problem.

It’s probably ridiculous how obvious the above resolution is, and yet I suspect we are not the only ones in the world who didn’t grasp it fully. If I had to make it an actionable item for any new entrepreneur I’d call it “Don’t start anything unless you have an expert in the field who is in daily contact with you.”

Every field has a certain amount of logic that you only really get once you immerse yourself in it. For me personally it took almost four months of daily work to “get it” when it came to how bids should be allowed to be placed. Your brain might tell you it’s a click of a button, but my code, with 40+ different exceptions that can be thrown along a single request, says differently.

We wouldn’t have gotten there without Tobi. It’s obvious that I have enormous respect for him, but at the same time he can drive me crazy.

I need a calm atmosphere in order to be productive. However, Tobi is all over the place. I can’t blame him - you’ve just read how many things he’s dealing with. But at times of pressure he’s sometimes expecting FOR THINGS TO BE FIXED IMMEDIATELY!!!
You probably get my point. I’m appreciative of all his input, but I need it to be filtered. Luckily my partners’ personalities and mine are on slightly different spectrums that (usually) complement each other:

I can code well in short sprints, where the scope is limited. I’m slightly obsessed with clean code and automatic testing, but I can’t hold it for super long periods.

Brice hardly ever gets stressed and can manage huge scopes. He’s more of an “if it works, don’t fix it” person, while I have a tendency to want to polish existing (ugly) code when I come across it. His “Pragmatic” level is set all the way to maximum. So while I don’t always agree with his technical decisions, one way or another, the end result is a beast of a system that allows managing huge collections of items, with their history and along with their accounting, invoicing and much more. In short, he delivers.

Yoav knows the ins and outs of the auction field. On top of that, his patience is only slightly higher than a Hindu cow’s. One can imagine the amount of pressure he has undergone in those first sales, when things were not as smooth as they should have been. I surely would have cracked.

This mix of personalities isn’t something we’re hiding. In fact it’s what allows us to manage this battlefield called auction sales. Sometimes the client needs some good old tender loving care, with a “Yes, we will have it”; sometimes they need to hear a “No, we will not get to it on time” in a calm voice; and sometimes they need to see me about to start foaming at the mouth when I feel our efforts are not appreciated.

Our Stack

Our Elm & Drupal stack is probably quite unique. After almost 4 years with this stack I’m feeling very strongly about the following universal truth:

Elm is absolutely awesome. We would not have had such a stable product with JS in such a short time. I’m not saying others could not do it in JS. I’m saying we couldn’t, and I wouldn’t have wanted to bother trying. In a way I feel that I have reached a point where I see people writing apps in JS and can’t understand why they are interacting with that language directly. If there is one technical tip I’d give someone looking into front-end development and feeling burned by JS, it is “try Elm.”

Drupal is also really great. But it’s built on a language without a proper type system and a friendly compiler. On any other day I’d tell you how nowadays I think that’s a really bad idea. However, I won’t do it today, because we have one big advantage by using Drupal - we master it. This cannot be underestimated: even though we have re-written CircuitAuction “just” three times, in fact we have built with Drupal many (many) other websites and web applications and learned almost everything that can be learned. I am personally very eager to get Haskell officially into our stack, but the business-oriented me doesn’t allow it yet. I’m not saying Haskell isn’t right. I’m just saying that for us it’s still hard to justify it over Drupal. Mastery takes many years, and is worth a lot of hours and dollars. I still choose to believe that we’ll get there.

On Investments, Cash Flow, and Marketing

We have a lot more work ahead of us. I’m not saying it in that extra cheerful and motivated tone one hears in cheesy movies about startups. No, I’m saying it in the “Shit! We have a lot more work ahead of us.” tone.
Ok, maybe a bit cheerful, and maybe a bit motivated - but I’m trying to make a point here.

For the first time in our Gizra life we have received a small investment ($0.5M). It’s worth noting that we sought a small investment. One of the advantages of building a product only after we’ve established a steady income is that we can invest some of our revenues in our entrepreneurial projects. But still, we are in our early days, and there is just about only a single way to measure whether we’ll be successful or not: will we have many clients?

We now have some money to buy us a few months without worrying about cash flow, but we know the only way to keep telling the CircuitAuction story is by selling. Marketing was done before, but now we’re really stepping on it, in Germany, the UK, the US and Israel. I’m personally quite optimistic, and I’m really looking forward to the upcoming months, to see for real if our team is as good as I think and hope, and to be able to simply say “We deliver.”

Continue reading…

Categories: Drupal

Code Karate: Code Karate is Back and Ready for Drupal 8

Fri, 10/26/2018 - 03:33
Episode Number: 210

I know it's been a while since I last posted, but I think we are all ready for some Drupal 8 videos! Let me know in the comments what you want to see posted in the future.

Check out the Code Karate Patreon page

Tags: Drupal, Drupal 8, Drupal Planet, General Discussion
Categories: Drupal

OpenSense Labs: Anatomy of Continuous Delivery with Drupal

Thu, 10/25/2018 - 20:21

Audi’s implementation of Continuous Delivery in its marketing has had an astronomical impact on its competitive advantage. For instance, when Audi released its new A3 model along with all its other new releases, it wanted to communicate the new features, convey the options, and help people understand the differences among body types, engines and the like. Continuous Delivery turned out to be the definitive solution. It helped in refining the messaging and optimising it on the fly to make sure that people understood what the automaker was trying to communicate.


Continuous Delivery (CD) is a quintessential methodology that makes the management and delivery of projects in big enterprises like Audi more efficient. When it comes to Drupal-based projects, Continuous Delivery can bring efficacy to the governance of projects, leading to better team collaboration and on-demand software delivery.

Read more on Continuous Integration with Drupal

Building and Deploying using Continuous Delivery (Source: Atlassian)

For many organisations, shipping takes a colossal amount of effort. If your team still relies on manual testing to prepare for releases and on manual or semi-scripted deploys to carry them out, releasing can be toilsome. No wonder software development is moving towards continuity. In the continuous paradigm, quality products are released to customers in a frequent and predictable manner, thereby reducing risk.

In 2010, Jez Humble and David Farley released a book called Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.

In this book, they argued that “software that’s been successfully integrated into a mainline code stream still isn’t a software that’s out in production doing its job”. That is, no matter how fast you assemble your product, it does not really matter if it is just going to be stored in a warehouse for months.

Continuous Delivery is the software development practice for building software in such a way that it can be released to production at any time.

Continuous Delivery refers to the software development practice of building software in such a way that it can be released to production at any time. So, if your software is deployable throughout its lifecycle, you are doing Continuous Delivery. In this, the team gives more priority to keeping the software deployable than to working on new features. This ensures that anybody can get quick, automated feedback on the production readiness of their systems whenever alterations are made.

Thus, Continuous Delivery enables push-button deployments of any software version to any environment on demand.

How does Continuous Delivery work? (Source: Amazon Web Services)

To achieve Continuous Delivery, you need to continuously integrate the software built by the development team, build executables, and run automated tests on those executables to detect problems.

Then, the executables are pushed into increasingly production-like environments to make sure the software is in working condition when it is pushed to production. This is done by implementing a deployment pipeline that provides visibility into the production readiness of your applications. It gives feedback on every alteration to your system and allows team members to perform self-service deployments into their environments.

Continuous Delivery requires a close, collaborative working relationship between the team members which is often referred to as DevOps Culture. It also needs extensive automation of all possible parts of the delivery process using a deployment pipeline.
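
As a rough, assumption-laden sketch (stage names, image and deploy script are all hypothetical, and any CI system would do), such a pipeline might be expressed in CircleCI configuration like this. Note the manual approval step before production: that gate is what distinguishes Continuous Delivery from Continuous Deployment, as discussed next.

    version: 2.1

    jobs:
      build_and_test:
        docker:
          - image: circleci/php:7.2   # assumed image
        steps:
          - checkout
          - run: composer install --no-interaction
          - run: ./vendor/bin/phpunit              # automated tests gate every change
      deploy_staging:
        docker:
          - image: circleci/php:7.2
        steps:
          - checkout
          - run: ./scripts/deploy.sh staging       # hypothetical script, e.g. wrapping drush
      deploy_production:
        docker:
          - image: circleci/php:7.2
        steps:
          - checkout
          - run: ./scripts/deploy.sh production

    workflows:
      delivery_pipeline:
        jobs:
          - build_and_test
          - deploy_staging:
              requires: [build_and_test]
          - hold:                                  # manual approval keeps the release a business decision
              type: approval
              requires: [deploy_staging]
          - deploy_production:
              requires: [hold]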

Continuous Delivery vs Continuous Integration vs Continuous Deployment

Continuous Delivery is often confused with Continuous Deployment.

In Continuous Deployment, every alteration goes through the pipeline and is automatically pushed into production, which results in many production deployments every day.

In Continuous Delivery, you are able to deploy frequently, and if the business demands a slower rate of deployment, you may choose not to. So, to perform Continuous Deployment, you must be doing Continuous Delivery.

Continuous Delivery builds on Continuous Integration and deals with the final stages that are required for production deployment.

So, where does Continuous Integration come into the picture? It allows you to integrate, build, and test code within the development environment. Continuous Delivery builds on this and deals with the final stages that are required for production deployment.

Benefits of Continuous Delivery

The major benefits of Continuous Delivery are:

  • Minimised Risk: As you are deploying smaller alterations, there’s reduced deployment risk and it is easier to fix whenever a problem occurs.
  • Trackable progress: Tracking deployed work gives you believable progress. Developers declaring work to be “done” is less believable; when it is deployed into a production environment, you actually see the progress right there.
  • Rapid feedback: One of the pivotal challenges of any software development is that you can wind up building something that is not useful. The earlier and more frequently you get working software in front of real users, the faster you get feedback on how valuable it really is.
Continuous Delivery with Drupal

The Drupal community has been a great catalyst for digital innovation. To make software development and deployment better with Drupal, the community has always leveraged technological innovations.


A session held at DrupalCon Amsterdam had the objective of bringing enterprise Continuous Delivery practices to Drupal with a comprehensive walkthrough of the open-sourced CD platform ‘Go’. The ‘Go’ project started off as ‘Cruise Control’ in 2001, rooted in the first principle of the Agile Manifesto: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

It outlined the principles of CD practice, exhibited how easy it is to get a Drupal build up and running in Go, and illustrated the merits of delivering in a pipeline. It involved setting up a delivery pipeline, then configuring build materials, build stages, build artefacts, jobs and tasks. Furthermore, it drilled down to familiar Drush commands and implemented the basic principles of CD.

Basically, it showed a build configuration that deploys Drupal sites using Phing, Drush and other tools, with the possibility of calling out to Jenkins as another way of managing tasks. Multiple steps of testing and approval were shown, with a separate path for content staging as distinct from code, thereby deploying a complex Drupal site.

Later, it emphasised testing and previewing on production before cutting over a release, zero-downtime releases, secure and simple rollback options, and making the release a business decision rather than a technical decision.

Moreover, it showed that Go’s trusted artefacts can take the ambiguities out of the build, with spectacular support for administering dependencies between different projects.

This session is very useful for developers who use Drush, have some understanding of DevOps and know about all-in-code delivery. Even those who undertake less technical roles like QA (Quality Assurance), BA (Business Analyst) and product owner will find it beneficial, as the CD practice is as much about the interaction of the team as about the tools and techniques.

What does the future of Continuous Delivery look like?

A report by MarketsandMarkets stated that the Continuous Delivery market was valued at USD 1.44 billion in 2017 and would reach USD 3.85 billion by 2023, at a Compound Annual Growth Rate (CAGR) of 18.5% during the forecast period of 2018-2023.

Open source Continuous Delivery projects and tools will dominate the commercial CD tools segment


Another report, by Mordor Intelligence, states that the market for Continuous Delivery is seeing a tremendous rise, due to the adoption of Artificial Intelligence (AI) and Machine Learning, the rapid deployment of connected infrastructure and the proliferation of automated digital devices. Open source CD projects and tools, however, will dominate the commercial CD tools segment.

The North American region is projected to have the largest growth in demand during the forecast period (2018-2023) because of the early adoption of cloud computing and IoT by the United States. The continuous evolution of new technologies (as shown above) has been the prime factor behind large-scale investments in the CD segment. Retail, healthcare, communications and manufacturing applications in North America are going to see a massive growth rate in the forecast period.

Conclusion

On-demand software delivery and enhanced team collaboration is the sort of combination that every major enterprise can benefit from. Continuous Delivery is one such mechanism that can help software development projects stay production-ready at all times. And this can work in favour of projects involving Drupal development and deployment.

OpenSense Labs has been steadfast in its goal of offering marvellous digital experiences with its suite of services.

Contact us at hello@opensenselabs.com to learn how Continuous Delivery can be implemented for your business in Drupal-based projects.

Categories: Drupal

Wim Leers: State of JSON API (October 2018)

Thu, 10/25/2018 - 18:22

Mateu, Gabe and I just released the first RC of JSON API 2, so time for an update!

It’s been three months since the previous “state of JSON API” blog post, where we explained why JSON API didn’t get into Drupal 8.6 core.

What happened since then? In a nutshell:

  • We’re now much closer to getting JSON API into Drupal core!
  • JSON API 2.0-beta1, 2.0-beta2 and 2.0-rc1 were released
  • Those three releases span 84 fixed issues. (Not counting support requests.)
  • includes are now 3 times faster, 4xx responses are now cached!
  • Fixed all spec compliance issues mentioned previously
  • Zero known bugs (the only two open bugs are core bugs)
  • Only 10 remaining tasks (most of which are for test coverage in obscure cases)
  • ~75% of the open issues are feature requests!
  • ~200 sites using the beta!
  • Also new: JSON API Extras 2.10, works with JSON API 1.x & 2.x!
  • Two important features are >80% done: file uploads & revisions (they will ship in a release after 2.0)

So … now is the time to update to 2.0-RC1!

JSON API spec v1.1

We’ve also helped shape the upcoming 1.1 update to the JSON API spec, which we especially care about because it allows a JSON API server to use “profiles” to communicate support for capabilities outside the scope of the spec. 1

Retrospective

Now that we’ve reached a major milestone, I thought it’d be interesting to do a small retrospective using the project page’s sparklines:

The first green line indicates the start of 2018. Long-time Drupal & JSON API contributor Gabe Sullice joined Acquia’s Office of the CTO two weeks before 2018 started. He was hired specifically to help push forward the API-First initiative. Upon joining, he immediately started contributing to the JSON API module, and I joined him shortly thereafter. (Yes, Acquia is putting its money where its mouth is.)
The response rate for this module has always been very good, thanks to original maintainer Mateu “e0ipso” Aguiló Bosch working on it quite a lot in his sparse free time. (And some company time — thanks Lullabot!) But there’s of course a limit to how much of your free time you can contribute to open source.

  • The primary objective for Gabe and I for most of 2018 has been to get JSON API ready to move into Drupal core. We scrutinized every area of the existing JSON API module, filed lots of issues, minimized the API surface, maximized spec compliance (hence also minimizing Drupalisms), minimized potential for regressions to occur, and so on. This explains the significantly elevated rate of the new issues sparkline. It also explains why the open bugs sparkline first increased.
  • This being our primary objective also explains the response rate sparkline being at 100% nearly continuously. It also explains the plummeted average first response time: it went from days to hours! This surely benefited the sites using JSON API: bug fixes happened much faster.
  • By the end of June, we managed to make the 1.x branch maximally stable and mature in the 1.22 release (shortly before the second green vertical line) — hence the “open bugs” sparkline decreased. The remaining problems required BC breaks — usually minor ones, but BC breaks nonetheless! The version of JSON API that ends up in core needs to be as future proof as possible: BC breaks are not acceptable in core. 2 Hence the need for a 2.x branch.

Surely the increased development rate has helped JSON API reach a strong level of stability and maturity faster, and I believe this is also reflected in its adoption: a 50–70 percent increase since the end of 2017!

From 1 to 3 maintainers

This was the first time I’ve worked so closely and so actively on a small codebase in an open-source setting. I’ve learned some things.

Some of you might understandably think that Gabe and I steamrolled this module. But Mateu is still very actively involved, and every significant change still requires his blessing. Funded contributions have accelerated this module’s development, but neither Acquia nor Lullabot ever put any pressure on how it should evolve. It’s always been the module maintainers, through debate (and sometimes heartfelt concessions), who have moved this module forward.

The “participants” sparkline being at a slightly higher level than before (with more consistency!) speaks for itself. Probably more importantly: if you’re wondering how the original maintainer Mateu feels about this, I’ll be perfectly honest: it’s been frustrating at times for him — but so it’s been for Gabe and me — for everybody! Differences in availability, opinion, priorities (and private life circumstances!) all have effects. When we disagree, we meet face to face to chat about it openly.

In the end I still think it’s worth it though: Mateu has deeper ties to concrete complex projects, I have deeper ties to Drupal core requirements, and Gabe sits in between those extremes. Our discussions and disagreements force us to build consensus, which makes for a better, more balanced end result! And that’s what open source is all about: meeting the needs of more people better :)

API-first Drupal with multiple consumers @DrupalConNA :D pic.twitter.com/GhgY8O5SSa

— Gábor Hojtsy (@gaborhojtsy) April 11, 2018

Thanks to Mateu & Gabe for their feedback while writing this!

  1. The spec does not specify how filtering and pagination should work exactly, so the Drupal JSON API implementation will have to specify how it handles this exactly. ↩︎

  2. I’ve learned the hard way how frustratingly sisyphean it can be to stabilize a core module where future evolvability and maintainability were not fully thought through. ↩︎

Categories: Drupal

Agiledrop.com Blog: Drupal meetup in Maribor

Thu, 10/25/2018 - 13:59

Last week we organised a Drupal meetup in Maribor (the second largest town in Slovenia, where Agiledrop has its second office). As a member of Drupal Slovenia, we organised two presentations and sponsored a reception with networking after the event. Are you interested in what those two lectures were about?

READ MORE
Categories: Drupal

Ashday's Digital Ecosystem and Development Tips: Drupal Module Spotlight: Paragraphs

Thu, 10/25/2018 - 00:03

 

I really don’t like WYSIWYG editors. I know that I’m not alone; most developers and site builders feel this way too. Content creators always request a wysiwyg, but I am convinced that it is more of a necessary evil and they secretly dislike wysiwygs too. You all know what wysiwygs (What You See Is What You Get) are, right? They are those nifty fields that allow you to format text with links, bolding, alignment, and other neat things. They can also add tables, iframes, flash code, and other problematic HTML elements. With Drupal we have been able to move things out of a single wysiwyg body field into more discrete purpose-built fields that match the shape of the content being created, and this has helped solve a lot of issues, but it still didn’t cancel out the need for the versatile body field that a wysiwyg can provide.

Categories: Drupal

Mobomo: NOAA Fisheries and Mobomo win 2018 Acquia Engage Award

Wed, 10/24/2018 - 20:45
Award Program Showcases Outstanding Examples of Digital Experience Delivery

Vienna, VA – October 24, 2018 – Mobomo today announced it was selected along with NOAA Fisheries as the winner of the 2018 Acquia Engage Awards for the Leader of the Pack: Public Sector. The Acquia Engage Awards recognize the world-class digital experiences that organizations are building with the Acquia Platform.

In late 2016, NOAA Fisheries partnered with Mobomo to restructure and redesign their digital presence. Before the start of the project, NOAA Fisheries worked with Foresee to help gather insight on their current users. They wanted to address poor site navigation, one of the biggest complaints. They also had concerns over their new site structure and wanted to test proposed designs and suggest improvements. In addition, the NOAA Fisheries organization had siloed information, websites and even servers within multiple distinct offices. The Mobomo team was (and currently is) tasked with consolidating information into one main site to help NOAA Fisheries communicate more effectively with all worldwide stakeholders, such as commercial and recreational fishermen, fishing councils, scientists and the public. Developing a mobile-friendly, responsive platform is of the utmost importance to the NOAA Fisheries organization. By utilizing Acquia, we are able to develop and integrate lots of pertinent information from separate internal systems with a beautifully designed interface.

“It has been a great pleasure for Mobomo to develop and deploy a beautiful and functional website to support NOAA Fisheries in managing this strategic resource. Whether supporting the work to help sustain Alaskan Native American fish stocks, providing a Drupal-based UI to support fishing council oversight of the public discussion of legislation, or helping commercial fishermen obtain and manage their licenses, Mobomo is honored to help NOAA Fisheries execute its mission.” – Shawn MacFarland, CTO of Mobomo

More than 100 submissions were received from Acquia customers and partners, from which 15 were selected as winners. Nominations that demonstrated an advanced level of functionality, integration, performance (results and key performance indicators), and overall user experience advanced to the finalist round, where an outside panel of experts selected the winning projects.

“This year’s Acquia Engage Award nominees show what’s possible when open technology and boundless ambition come together to create world-class customer experiences. They’re making every customer interaction more meaningful with powerful, personalized experiences that span the web, mobile devices, voice assistants, and more,” said Joe Wykes, senior vice president, global channels at Acquia. “We congratulate Mobomo and NOAA Fisheries and all of the finalists and winners. This year’s cohort of winners demonstrated unprecedented evidence of ROI and business value from our partners and our customers alike, and we’re proud to recognize your achievement.”

“Each winning project demonstrates digital transformation in action, and provides a look at how these brands and organizations are trying to solve the most critical challenges facing digital teams today,” said Matt Heinz, president of Heinz Marketing and one of three Acquia Engage Award jurors. Sheryl Kingstone of 451 Research and Sam Decker of Decker Marketing also served on the jury.

About Mobomo

Mobomo builds elegant solutions to complex problems. We do it fast, and we do it at a planetary scale. As a premier provider of mobile, web, and cloud applications to large enterprises, federal agencies, napkin-stage startups, and nonprofits, Mobomo combines leading-edge technology with human-centered design and strategy to craft next-generation digital experiences.

About Acquia

Acquia provides a cloud platform and data-driven journey technology to build, manage and activate digital experiences at scale. Thousands of organizations rely on Acquia’s digital factory to power customer experiences at every channel and touchpoint. Acquia liberates its customers by giving them the freedom to build tomorrow on their terms.

For more information visit www.acquia.com or call +1 617 588 9600.

###

All logos, company and product names are trademarks or registered trademarks of their respective owners.


Categories: Drupal

Palantir: University of California Berkeley Extension

Wed, 10/24/2018 - 20:20

How we helped UC Berkeley Extension reduce the cost of student enrollment.

extension.berkeley.edu | Streamlined Enrollment to Nurture Students in Their Journeys

UC Berkeley Extension (Extension) is the continuing education branch of the University of California Berkeley. Extension offers more than 2,000 courses each year, including online courses, as well as more than 75 professional certificates and specialized programs of study.

Extension knew their site was significantly behind where the student user experience needed to be, and they needed assistance in simplifying enrollment. While preparing for a redesign of their website, Extension approached Palantir as a subject matter expert on website redesign who could user-test their new information architecture and design, and conduct user research in order to recommend revisions that would improve enrollment conversions on future iterations of the site. The ultimate goal was to make it easier for students to continue their educational journey at Extension.

Reducing The Cost of Student Enrollment

UC Berkeley Extension has over 40,000 student enrollments a year. Prior to their engagement with Palantir, it took 127 web sessions between a student’s first visit and enrollment.

In the first three months after implementing Palantir’s recommendations, that number decreased by roughly 35% to only 82.5 web sessions needed to secure an enrollment. By decreasing this number, Extension was able to capture more revenue per web session, increasing the average from $6.08 to $10.68 per session.

Here’s How We Did It

Because Extension had already done significant market research, we quickly nailed down the key goals of the project and how we would define success.

We identified a two-pronged approach:

  1. Validate their recent site redesign and new information architecture through virtual and in-person user testing; and
  2. Conduct user research, and create and validate wireframes to support their execution of a future redesign.

Palantir came in as the subject matter experts on the re-design of our multi-million dollar e-commerce web site. They exceeded expectations on every measure. We then re-hired them for a subsequent project. We recommend Palantir highly.

Jim Kaczkowski

Marketing Manager, University of California Berkeley Extension

Our Methods

In order to move the needle on business outcomes, methods must be backed with real, actionable insights and data. For Extension, this meant developing a deep understanding of their users’ behavior and motivations.

First, we defined key audience segments and generated personas and user journeys. Then, we validated the way that each segment interacts with the site through menu testing and in-person usability testing. This user research gave us direct and applicable insights which established the foundation for what kinds of features prospective students need and expect from the site.

We continued our exploration of audience needs by conducting a competitive analysis of six competitor sites in the higher, continuing, and online education space. Outcomes of this research revealed that students need more cues before they make a decision about enrolling in a course and before they take a deeper dive into a program or course page.

Questions like: “Is the course open or closed?” “Is there a waitlist?” and “Is it at a location convenient to me?” linger in a student’s mind.

Based on the competitive analysis, audience definition, in-person usability testing, and menu testing, Palantir developed a set of wireframes to support Extension’s upcoming redesign.

These outlined many of the key priorities that surfaced throughout the project, such as:

  • Simplifying the Student Services landing page
  • Surfacing content that supports the offerings of the courses and programs (e.g. instructor expertise and alumni success)
  • Making information about career outcomes more prominent

But the testing didn’t stop there. Once wireframes were created, we validated them further by conducting a final set of first-click tests, designed to help identify and close gaps between the designs and what the audience members wanted to do on the site.

The strategy work we did allowed Extension to gain a better sense of the needs and pain points of their audience and revealed a handful of key points for them to address:

  • The Extension site needed a more extensive faceted search.
  • Extension needed to work with the institution to reposition and rebrand the Student Services department as a key advocate for incoming, current and returning students.
  • Extension needed to modify its messaging to better surface the qualities of its curriculum, flexibility and affordability, along with instructor expertise so that prospective students could quickly get a sense of the value of the education and academic offerings.

Palantir helped to shape the future evolution of the Extension website by equipping the UC Berkeley team with a set of user experience tools and methods they continue to utilize. The user research compiled throughout the engagement continues to inform their design as they undertake new website projects, always with the student journey top of mind.

As our advice is continuously implemented, the results of Palantir’s work are clear: fewer dropped sessions, fewer questions and calls to the Registrar’s office about things that couldn’t before be found on the website, and a 75% increase in revenue per session.

Categories: Drupal

TEN7 Blog's Drupal Posts: Episode 042: DrupalCorn 2018

Wed, 10/24/2018 - 19:47
It is our pleasure to welcome Tess Flynn to the TEN7 podcast to discuss attending the 2018 DrupalCorn and presenting "Dr. Upal Is In, Health Check Your Site". Tess is TEN7's DevOps engineer. Here's what we're discussing in this podcast: DrupalCorn2018; DrupalSnow; Camp scheduling; What it takes to put on a camp; Unconference the conference; Substantive keynotes; Dr. Upal is now in; The good health of your website is important; It takes humans and tools; Every website is a bit like a person, it’s a story; Docker-based Battle Royale; Auditing the theme; Mental health and tech; Drupal 8 migration; A camp with two lunches; Loaded baked potatoes and corn; Cornhole; Catching Jack the Ripper; Onto DrupalCamp Ottawa
Categories: Drupal

MTech, LLC: Troubleshooting a Drupal 8 Migration

Wed, 10/24/2018 - 18:21

A day doesn't go by that someone isn't asking a question in Slack #migration about how to troubleshoot a specific problem with a tricky migration. Almost universally, these problems can be demystified by using Xdebug and putting breakpoints in two spots in core's MigrateExecutable. The first is in the ::import() method, where it rewinds the source and then processes it. The second place I regularly put a breakpoint is in ::processRow().
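
If an IDE isn't handy, a rough equivalent is a temporary, throwaway snippet called from those same two spots. Everything below is a hypothetical sketch: the helper name, the 'nid' property and the ID 42 are placeholders for whatever row you are chasing.

    <?php

    use Drupal\migrate\Row;

    /**
     * Throwaway debugging helper - do not commit.
     *
     * Call this from MigrateExecutable::import() right after the source is
     * rewound, or from ::processRow(), to pause only on the row you care about.
     */
    function _debug_migrate_row(Row $row) {
      // Placeholder condition: substitute your own source ID property/value.
      if ($row->getSourceProperty('nid') == 42) {
        // Trigger an Xdebug breakpoint programmatically...
        if (function_exists('xdebug_break')) {
          xdebug_break();
        }
        // ...or just dump the raw source values for a quick look.
        var_dump($row->getSource());
      }
    }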

Categories: Drupal
