SurviveJS


Blitz.js - The Full-stack React Framework - Interview with Brandon Bayer

Although React is a UI library, that doesn't mean you couldn't write full-stack applications around it. Frameworks, such as Next.js, have appeared to make it easier. To learn more about one such framework, Blitz.js, I am interviewing Brandon Bayer.

Can you tell a bit about yourself?#

[Image: Brandon Bayer]

I'm the creator and full-time maintainer of Blitz.js. I have been a professional developer since 2010. In 2017 I quit my job to do full-time consulting. That was one of the best decisions I have ever made. When I started Blitz, I cut back my consulting hours to 20 per week or less. That allows me to work on Blitz while still making a living.

Outside of work, I love aviation. I got my pilot's license right after I turned 17 years old. I haven't flown in a few years, but I'm hoping to get back in the air as soon as possible.

How would you describe Blitz.js to someone who has never heard of it?#

It's the equivalent of Ruby on Rails or Laravel but for JavaScript and React. It's built on top of Next.js so that you get everything excellent about Next.js plus everything else you need for building a full-stack app.

To learn about Next.js, read the interview with Arunoda Susiripala.

How does Blitz.js work?#

Fundamentally it works the same as Next.js. However, Blitz adds a novel "zero-API" data layer, which abstracts the API layer into a compile step. As a developer, you don't need to mess with REST or GraphQL APIs. Blitz lets you write functions that run on the server, import them into your React components, and they will work like magic.

[Image: Server and client]

How does Blitz.js differ from other solutions?#

RedwoodJS is the only other alternative trying to solve the same problems. They take a different approach. Instead of abstracting away the API, they keep the GraphQL API layer and double down on making it easy to use.

Also, Blitz has built-in email/password authentication, already set up for you out of the box. So you never have to set up auth again if you use Blitz.

Why did you develop Blitz.js?#

I developed Blitz because I was so frustrated with having to manage APIs and data fetching. It's incredible how much the API layer slows you down and causes problems. People are saying Blitz makes you 5-10x more productive because of this.

And then there was also the headache of choosing libraries and getting them all to work together. I wanted a batteries-included framework like Ruby on Rails.

What does the future look like for Blitz.js and web development in general? Can you see any particular trends?#

Full-stack development with JavaScript/TypeScript and React is going to continue to get easier and faster. We've already made such a difference with Blitz, but there's a TON more we can and want to do. It's still early days for full-stack web dev.

Serverless full stack with Blitz is still the wild west right now, but I fully expect this to improve dramatically over the next couple of years.

What advice would you give to programmers getting into web development?#

I recommend working at an agency at some point, so you get more varied experience than working on a single enterprise product.

Also, don't stay at the same job for too long. Ideally, switch jobs every couple of years, so you gain more experience and because you can raise your salary much more easily by changing companies.

Who should I interview next?#

Colin McDonnell.

Any last remarks?#

Go to blitzjs.com to learn more about Blitz and take it for a test drive.
TL;DR: run `npm i -g blitz` and then `blitz new myAppName`, and in just a couple of minutes you'll have a new Blitz app running, complete with user sign up, log in, log out, and forgot password all working! If you want to keep up with what I'm working on, follow me at @flybauer.

Conclusion#

Thanks for the interview, Brandon! I agree the full-stack experience with React can be improved further, but it's pretty impressive what you've done so far.

To get started with Blitz, consider the following resources:

Video: Complete intro to Blitz
Blitz Discord
@blitz_js
Blitz on GitHub
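To make the zero-API data layer described in the interview concrete, here's a minimal sketch based on Blitz's documented query pattern. The file layout and names (getProject, the db module) follow Blitz conventions but are illustrative rather than taken from a real app; Suspense boundaries and error handling are omitted for brevity:

```js
// app/queries/getProject.js — a server function; it only ever runs server-side.
import db from "db"; // Blitz's generated Prisma client wrapper

export default async function getProject({ id }) {
  return db.project.findFirst({ where: { id } });
}
```

```jsx
// app/pages/projects/[id].js — the query is imported like a local function.
import { useQuery, useParam } from "blitz";
import getProject from "app/queries/getProject";

export default function ProjectPage() {
  const id = useParam("id", "number");
  // At build time Blitz replaces this call with an auto-generated RPC endpoint.
  const [project] = useQuery(getProject, { id });
  return <h1>{project.name}</h1>;
}
```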

Rubico - [a]synchronous functional programming - Interview with Richard Tong

One of the tricky parts of JavaScript is dealing with asynchronous behavior. The language itself has introduced improved syntax (async/await) and utilities like Promise.all and Promise.race, but it's not enough for more advanced use cases. To learn about a potential solution, I am interviewing Richard Tong about a new library called Rubico.

Can you tell a bit about yourself?#

[Image: Richard Tong]

I am a programmer based in Los Angeles who enjoys solving problems with JavaScript. Currently, I am working on Claimyr - the quickest way to speak with an unemployment agent. In my spare time, I enjoy going out to eat and getting coffee.

How would you describe Rubico to someone who has never heard of it?#

Rubico is a set of functions that supports a simple and expressive way of programming in JavaScript. With Rubico, you can reduce a ton of boilerplate surrounding Promise handling in your code, allowing you to focus on writing business logic and shipping quickly. Rubico is geared towards ES2018+, as it requires support for async generator function syntax.

How does Rubico work?#

Use Rubico's operators to create async-enabled compositions of functions. Each operator handles Promise resolution for you. Consider the following example:

```js
const { pipe, map, filter } = rubico;

const isOdd = (number) => number % 2 == 1;

const asyncSquare = async (number) => number ** 2;

const squaredOdds = pipe([
  // each asyncSquare Promise is resolved before filter
  map(asyncSquare),
  filter(isOdd),
]);

squaredOdds([1, 2, 3, 4, 5]).then(console.log); // [1, 9, 25]
```

At the moment, Rubico supports the following functions:

```js
const {
  pipe, tap,
  switchCase, tryCatch,
  fork, assign, get, pick, omit,
  map, filter, reduce, transform, flatMap,
  and, or, not, any, all,
  eq, gt, lt, gte, lte,
  thunkify, always,
  curry, __,
} = rubico;
```

You can create full applications with just pipe and tap. Usually I recommend people start with these two. Use pipe to chain a bunch of functions (sync or async) together, then use tap to specify any "side-effecting" functions, i.e., functions that shouldn't contribute to the main flow, such as writing to a file or database. The rule is pretty arbitrary and will always be as pure as your best effort.

Here's a setup you could get started with that I've been using for my HTTP handlers:

```js
const MyHttpHandler = ({
  dependencyA, dependencyB, myConfigValue,
}) => (request, response) => tryCatch(pipe([
  always(request),
  transform(map(chunk => chunk), Buffer.from('')),
  callProp('toString', 'utf8'),
  JSON.parse, // { parameterA: 'hey', parameterB: 100 }
]), error => {
  console.error(error.message)
  response.writeHead(error.code ?? 500, {
    'Content-Type': 'text/plain',
  })
  response.end(error.message)
})()

// Initialize dependencies, grab config values...
http.createServer(MyHttpHandler({
  dependencyA, dependencyB, myConfigValue,
})).listen(3000)
```

If you are interested in getting started with Rubico, I recommend taking the tour and then glancing over all the functions at the docs. Try to master the core API first, then move on to the advanced functions in rubico/x.

How does Rubico differ from other solutions?#

Rubico is comparable to Lodash FP, Ramda, Bluebird, and RxJS. All five libraries are competing in the utility space, though with differing core principles/ideologies.
I've compared them briefly below to show you the differences:

Rubico vs Lodash FP:#

Lodash FP - immutable, auto-curried, iteratee-first, and data-last methods
Rubico - mutable, uncurried, promise-resolving, iteratee-first, and data-last methods

Rubico vs Ramda:#

Ramda - immutability and side-effect free functions are at the heart of its design philosophy
Rubico - composability, performance, and simplicity are at the heart of its design philosophy

Rubico vs Bluebird:#

Bluebird - built around Promises. Utility operators focus on Promise handling.
Rubico - built around async functions. Utility operators focus on async function composition.

Rubico vs RxJS:#

RxJS - a library for composing asynchronous and event-based programs by using observable sequences
Rubico - a library for composing asynchronous and event-based programs with async functions

Similarities#

Rubico, Lodash FP, and Ramda all have a placeholder operator __. Rubico's __ can be used in conjunction with Rubico's curry to create new functions from existing ones by fixing some of the arguments. Lodash FP and Ramda don't need the curry function as much because their functions come auto-curried. Rubico does not curry automatically for performance reasons and instead exports higher-order functions with fixed signatures.

```js
const { curry, __ } = require("rubico");
const R = require("ramda");
const _ = require("lodash/fp");

const add = (a, b) => a + b;

// rubico
const add3Rubico = curry(add, __, 3);
add3Rubico(5); // 8

// ramda
const add3Ramda = R.curry(add)(R.__, 3);
add3Ramda(5); // 8

// lodash/fp
const add3Lodash = _.curry(add)(_.__, 3);
add3Lodash(5); // 8
```

Both Rubico and Bluebird provide an asynchronous pooling option. With Rubico, you can specify an asynchronous limit while applying an async function to each item of a collection via the property function map.pool. Bluebird enables pooling functionality via the concurrency option on Bluebird's Promise.map.

```js
const Promise = require("bluebird");
const { map } = require("rubico");

const sleep = (ms) =>
  new Promise((resolve) => setTimeout(resolve, ms));

// bluebird
Promise.map(
  [1, 2, 3, 4, 5],
  async function asyncSquare(number) {
    console.log("squaring", number);
    await sleep(500);
    return number ** 2;
  },
  { concurrency: 2 }
);

// rubico
map.pool(2, async function asyncSquare(number) {
  console.log("squaring", number);
  await sleep(500);
  return number ** 2;
})([1, 2, 3, 4, 5]);
```

Why did you develop Rubico?#

Initially, I developed Rubico because I needed a function that could chain async functions together in a data-last fashion. Then I wondered about other ways in which async functions could be composed. The rest went from there.

What next?#

I'm building Claimyr with Rubico and a couple of other libraries I'm working on: Presidium and Arche. Presidium provides a type system that addresses the complete set of needs of a back-end Node.js application architect: from handling HTTP to working with Amazon Web Services like DynamoDB or S3, to deploying on your in-house Docker swarm. Arche is a simple wrapper over React, enabling a declarative interface for working with React without the need for transpilation. These libraries and others contribute to high-quality software development at Claimyr.

What does the future look like for Rubico and web development in general? Can you see any particular trends?#

Rubico has a long roadmap - it is just getting started. There are still a lot of cool and useful asynchronous behaviors yet to be implemented.
For example, reduce.parallel could apply an asynchronous reducer in parallel to a possibly infinite or asynchronous source.

I think it's hard to predict exactly where we'll end up in the next few years, or even the next year. Innovation happens every day, for sure - chances are you'll be using new software a year from now.

What advice would you give to programmers getting into web development?#

If you feel like you are struggling, keep at it for as long as you can, then get a good night's sleep. Chances are, you will grasp it a little better the next day.

Who should I interview next?#

Thomas Wang, the co-founder of @Napkin.

Any last remarks?#

Thanks for giving me the time. It's an exciting time to be a web developer. If you are interested in contributing to Rubico or any of my other projects, or even just learning about how to build for the web, please reach out to me via email.

Conclusion#

Thanks for the interview, Richard! Rubico looks like a great solution for handling any complex asynchronous case, and I hope you keep improving it.

You can find rubico on npm and on GitHub.
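As a small addendum to the pipe-plus-tap pattern Richard describes above, here's a minimal sketch of tap passing a value through unchanged while running a side effect (the logging is illustrative):

```js
const { pipe, tap, map } = require("rubico");

const double = (number) => number * 2;

// tap calls its function for effect and returns the input unchanged,
// so the logging steps don't contribute to the main flow.
const doubleAll = pipe([
  tap((numbers) => console.log("input:", numbers)),
  map(double),
  tap((numbers) => console.log("output:", numbers)),
]);

doubleAll([1, 2, 3]); // input: [1, 2, 3], then output: [2, 4, 6]
```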

PropagateAt - Talk to your greatest fans via text - Interview with Kumar Abhirup

Developing your own product is both exciting and challenging. There's both the business side and the development side to worry about and to manage, and both have to be on a good enough level for the product to work. In this interview, we'll learn about a product called PropagateAt by Kumar Abhirup. It's the first product he has developed, so it's interesting to hear what he has to say about the product and the experience!

Can you tell a bit about yourself?#

[Image: Kumar Abhirup]

Hi, I am Kumar. I am in 11th grade, coding since age 12, currently building PropagateAt, the Substack for SMS. You can find me online at the following places:

GitHub: https://github.com/KumarAbhirup
Website: https://kumarabhirup.me
dev.to: https://dev.to/kumar_abhirup
Medium: https://medium.com/@kumar_abhirup
LinkedIn: https://linkedin.com/in/kumar-abhirup/

How would you describe PropagateAt to someone who has never heard of it?#

I call it the Substack for SMS. Substack is a writing platform oriented toward independent writers and exclusive access to their content! PropagateAt is a similar tool, except it has been built around SMS. It allows creators to create paid texting communities, chat, send newsletters, and connect to their fans one-on-one using a US phone number.

A creator who wants to start their own paid text message newsletter first signs up with PropagateAt. The creator then gets a new personal US number they can advertise anywhere. For example, "Hey, text me here!". After this, the fans will text the number, and if the newsletter is free, they will be subscribed right away. If the newsletter is a premium one, the fans get an automatic payment link, and after the payment, they get subscribed to the newsletter at a monthly cost. These fans then get the privilege to chat via SMS with their favorite influencers! It works just like Substack's paid newsletters, but over SMS.

Whatever the creators earn, they keep 85% of it. They can connect their Stripe accounts and withdraw the money earned anytime!

How does PropagateAt differ from other solutions?#

There are many companies building SMS solutions. Most of them are for corporates and companies to send unsolicited marketing spam or so. There are many "MailChimp for SMS" services out there already.

The closest (and you could say the biggest) competitor to us is Community. It has been made for really famous creators who are sending bulk text messages to their fans. But creators cannot use it like a paid blogging platform and cannot monetize their text newsletter effectively. Also, Community is very costly, so medium-scale influencers mostly cannot afford it.

What makes PropagateAt different is its appeal to creators and influencers of all scales, giving them a platform and a billing infrastructure to help them earn money over SMS by sending premium content to their fans, on a monthly subscription basis, at a lower price.

Why did you develop PropagateAt?#

My mother had 10k subscribers on WhatsApp. To automate her tedious WhatsApp process, I tried to create a WhatsApp bulk newsletter service with Twilio. Then I realized that there are a ton of Facebook restrictions with it. Do you know what does not have such restrictions? SMS! And it's barely touched by creator-friendly marketing companies. Also, creators and influencers earning their income over SMS content is a new thing and a vastly uncovered market. After I understood the potential of SMS marketing for creators, I decided to make PropagateAt!
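As an illustration of the 85/15 revenue split and the Stripe Connect setup mentioned above, here's a minimal sketch of a Stripe destination charge where the platform keeps a 15% application fee. The names and amounts are hypothetical, not PropagateAt's actual billing code:

```js
const stripe = require("stripe")(process.env.STRIPE_SECRET_KEY);

// Charge a subscriber and route 85% of the payment to the creator's
// connected Stripe account, keeping 15% as the platform fee.
async function chargeSubscriber({ amountCents, creatorAccountId }) {
  return stripe.paymentIntents.create({
    amount: amountCents,
    currency: "usd",
    payment_method_types: ["card"],
    application_fee_amount: Math.round(amountCents * 0.15),
    transfer_data: { destination: creatorAccountId },
  });
}
```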
What next?#

I am currently building and growing the product (an MVP at the moment) by slowly giving users access to the dashboard and new features to ensure everything goes smoothly. We have an AppSumo launch scheduled, then we will launch on Product Hunt, and we'll also apply to Y Combinator. I am mostly on calls with influencers these days, and they love the idea, and I am slowly onboarding them over to PropagateAt.

What does the future look like for PropagateAt and web development in general? Can you see any particular trends?#

For PropagateAt, there is a massive redesign coming, so yea, stay tuned, haha. When I started building the product back in 2020, I didn't make the architecture scalable. The target of the work is to improve this aspect over time.

In the upcoming years, web development will probably get easier with all these no-code tools coming up.

What technical challenges did you encounter while developing PropagateAt?#

I mean, a lot. While building the MVP for 7-8 months, I faced many issues. And I still do.

Information security is hard#

The biggest technical challenge for me was to keep everything secure, and I am not going to lie, I am not good at CyberSec.

Server-side rendering is challenging#

The second thing I am struggling with is making the current stack work with Apollo GraphQL server-side rendering (SSR). At the moment, all the queries are being made client-side, and I want them to be done on the server-side for better link previews, for example. I am using Next.js, so it should be easy, right? But I guess there's something that's not right, and I am still trying to figure it out.

Payments are complex#

Currently, the service works on Twilio internally. I am figuring out a way to streamline the money flow from clients to PropagateAt to Twilio Balance. Right now the process is all manual, so that when a client pays, I get a notification, and then I add a top-up in Twilio. I want to make it seamless so that it automatically tops up the Twilio account balance when a new user pays, but I am still figuring out how to do that.

Stripe regulations in India are complex. A Stripe Express Connect account does not work for Indian merchants, and therefore we have to resort to a secondary Connect account method. I am using the Stripe Connected Accounts service to pay creators what they earn via PropagateAt. It currently does not work well, because to send money to creators your Stripe wallet needs to be filled. But for Indian Stripe accounts, the money gets withdrawn to banks as soon as it comes in, which means Stripe cannot be used as a wallet in India, making it harder for creators to get paid. Currently, to solve this issue, we manually pay creators via PayPal. And later, to fix this issue, all we need is a Stripe Atlas incorporation.

What advice would you give to programmers getting into web development?#

Just build something! I am not great at coding, but when it comes to learning new things, I can hack and develop stuff my way, and I can do it pretty well! I used to be in tutorial purgatory back in 2019, and I am glad I got out of it.

I would recommend beginners log on to FreeCodeCamp, take their JavaScript course, try it out, and start building small projects! Your portfolio will be your first great start! Your aim at the beginner stage should mainly be that, after 1-2 years, you are able to build a working full-stack app. That is a nice goal to have in the early stages. The ability to develop whatever you want is a great skill, and I would always choose it over anything else.
Who should I interview next?#

Digvijay Singh Rathore. He is multi-talented, better than me at everything.

Any last remarks?#

The questions were great! They really got me thinking!

Conclusion#

Thanks for the interview, Kumar! I find it admirable that you built your product and found a growing market for it. I wish great success for your business!

To learn more about the product, head to the PropagateAt site.

GruCloud - Infrastructure as Code in JavaScript - Interview with Frederic Heem

For me, it's always amazing how complex an infrastructure it's possible to configure these days. If you master a platform like AWS or GCP, you can do quite a lot. Earlier it took a lot of expertise and hardware to pull this off, and these days you can go to a suitable service and manage it all through a web interface. As a JavaScript programmer, it's interesting to me that you can achieve the same using a language I know. Frederic Heem has developed a new solution called GruCloud that does this.

Can you tell a bit about yourself?#

[Image: Frederic Heem]

I have been developing software for the last 20 years, mainly in the UK and Italy. Lately, the focus has been on JavaScript: frontend, backend, and mobile.

In case you want to learn more about Frederic's work, I interviewed him a few years back about his starter kit for React and Node.

How would you describe GruCloud to someone who has never heard of it?#

GruCloud is an Infrastructure as Code tool. DevOps people can write a description of their cloud infrastructure in JavaScript. Then they use the GruCloud CLI to manage the deployment, update, and destruction of such infrastructure. By cloud infrastructure, we mean servers, public IP addresses, DNS settings, file storage, and so on. At the moment, GruCloud interfaces with AWS, Google Cloud, and Microsoft Azure.

How does GruCloud work?#

First of all, we describe infrastructure in JavaScript, for example, a virtual machine on Google Cloud:

```js
const { GoogleProvider } = require("@grucloud/core");

exports.createStack = async ({ config }) => {
  const provider = GoogleProvider({ config });

  const server = await provider.makeVmInstance({
    name: `webserver`,
    properties: () => ({
      diskSizeGb: "20",
      machineType: "f1-micro",
      sourceImage:
        "projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts",
      metadata: {
        items: [
          {
            key: "enable-oslogin",
            value: "True",
          },
        ],
      },
    }),
  });

  return {
    provider,
  };
};
```

The next step is to use the GruCloud command-line interface gc to deploy, list, update, and destroy the server.
In the first step, planning, the tool performs the following steps:

It queries the cloud providers to find out which resources are already deployed
Compares to what should be installed according to the code
Produces a plan of what needs to be deployed or removed

```
$ gc apply
Querying resources on 1 provider: google
✓ google
  ✓ Initialising
  ✓ Listing 16/16
  ✓ Querying
    ✓ VmInstance 1/1
┌───────────────────────────────────────────────────────────────────────┐
│ 1 VmInstance from google                                              │
├───────────┬──────────┬────────────────────────────────────────────────┤
│ Name      │ Action   │ Data                                           │
├───────────┼──────────┼────────────────────────────────────────────────┤
│ webserver │ CREATE   │ kind: compute#instance                         │
│           │          │ name: webserver                                │
│           │          │ zone: projects/grucloud-e2e/zones/southameri…  │
│           │          │ machineType: projects/grucloud-e2e/zones/sou…  │
│           │          │ labels:                                        │
│           │          │   managed-by: grucloud                         │
│           │          │   stage: dev                                   │
│           │          │ metadata:                                      │
│           │          │   items:                                       │
│           │          │     - key: enable-oslogin                      │
│           │          │       value: True                              │
│           │          │   kind: compute#metadata                       │
│           │          │ disks:                                         │
│           │          │   - kind: compute#attachedDisk                 │
│           │          │     type: PERSISTENT                           │
│           │          │     boot: true                                 │
│           │          │     mode: READ_WRITE                           │
│           │          │     autoDelete: true                           │
│           │          │     deviceName: webserver-managed-by-gru       │
│           │          │     initializeParams:                          │
│           │          │       sourceImage: projects/ubuntu-os-cloud/…  │
│           │          │       diskType: projects/grucloud-e2e/zones/…  │
│           │          │       diskSizeGb: 20                           │
│           │          │     diskEncryptionKey:                         │
│           │          │ networkInterfaces:                             │
│           │          │   - kind: compute#networkInterface             │
│           │          │     subnetwork: projects/grucloud-e2e/region…  │
│           │          │     accessConfigs:                             │
│           │          │       - kind: compute#accessConfig             │
│           │          │         name: External NAT                     │
│           │          │         type: ONE_TO_ONE_NAT                   │
│           │          │         networkTier: PREMIUM                   │
│           │          │     aliasIpRanges: []                          │
│           │          │ displayDevice:                                 │
│           │          │   enableDisplay: false                         │
│           │          │ canIpForward: false                            │
│           │          │ scheduling:                                    │
│           │          │   preemptible: false                           │
│           │          │   onHostMaintenance: MIGRATE                   │
│           │          │   automaticRestart: true                       │
│           │          │   nodeAffinities: []                           │
│           │          │ deletionProtection: false                      │
│           │          │ reservationAffinity:                           │
│           │          │   consumeReservationType: ANY_RESERVATION      │
│           │          │ shieldedInstanceConfig:                        │
│           │          │   enableSecureBoot: false                      │
│           │          │   enableVtpm: true                             │
│           │          │   enableIntegrityMonitoring: true              │
│           │          │ confidentialInstanceConfig:                    │
│           │          │   enableConfidentialCompute: false             │
└───────────┴──────────┴────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────┐
│ Plan summary for provider google                                    │
├─────────────────────────────────────────────────────────────────────┤
│ DEPLOY RESOURCES                                                    │
├────────────────────┬────────────────────────────────────────────────┤
│ VmInstance         │ webserver                                      │
└────────────────────┴────────────────────────────────────────────────┘
? Are you sure to deploy 1 resource, 1 type on 1 provider? › (y/N)
```

In the second step, performing the deployment, the tool calls the cloud providers' API to create, update, or delete resources according to the plan produced in the previous step:

```
Deploying resources on 1 provider: google
✓ google
  ✓ Initialising
  ✓ Deploying
    ✓ VmInstance 1/1
1 resource deployed of 1 type and 1 provider
Running OnDeployed resources on 1 provider: google
✓ google
  ✓ Initialising
Command "gc apply" executed in 2m 5s
```

At this stage, the server should be up and running. To check the state of the deployment, you can use the list command:

```
gc list
```

To reduce your cloud provider bills, destroy the infrastructure:

```
gc destroy
```

Things can get a little bit more complex with more resources and the dependencies between them.
The gc graph command produces a graph displaying the infrastructure:

[Image: infrastructure graph]

GruCloud takes care of creating and destroying the resources in the right order.

How does GruCloud differ from other solutions?#

The two other solutions on the market are Terraform and Pulumi. With Terraform, the infrastructure code is written in a Domain-Specific Language called HCL, as opposed to Pulumi and GruCloud, where you use JavaScript, a general-purpose language. The other difference is in the implementation language of the numerous resources for the various cloud providers: JavaScript for GruCloud, and Go for Terraform and Pulumi.

Why did you develop GruCloud?#

Infrastructure as Code is gaining traction and popularity. At the moment, there is no tool entirely written in JavaScript, and GruCloud is filling this gap.

What next?#

Many resources on various cloud providers need to be implemented. For example, AWS Lambda and Kubernetes are missing at the moment. Fortunately, adding a new simple resource is relatively easy. Complete with testing and documentation, it should take about a day on average.

What does the future look like for GruCloud and web development in general? Can you see any particular trends?#

Many companies are moving from owning their equipment to renting from cloud providers. So the pool of potential users is still increasing. The future is bright.

What advice would you give to programmers getting into web development?#

Stay focused on one problem at a time, and finish it.

Who should I interview next?#

Richard Tong, the author of rubico, an asynchronous functional programming library heavily used by GruCloud.

Any last remarks?#

For more information about this project, visit GruCloud.

Conclusion#

I find it highly interesting that it's possible to define and orchestrate infrastructure by using JavaScript alone. That means I don't have to become an expert at navigating a particular provider's user interface, which seems like a win.

Head to the GruCloud site to learn more about the solution, star them on GitHub, and follow them on Twitter.

Renderlesskit React - Collection of composable headless hooks - Interview with Anurag Hazra

When developing user interfaces with React, you often create a set of basic primitives. Another option is to consume them from a third-party library and perhaps build more complex components yourself. To learn about a potential solution to the problem, I am interviewing Anurag Hazra.

Can you tell a bit about yourself?#

[Image: Anurag Hazra]

Hi! My name is Anurag, and I'm a frontend engineer from India currently working at timelessco. I enjoy building interactive user interfaces and also love to do creative coding in my free time. I love to contribute to open source projects and have even created a few of my own.

How would you describe Renderlesskit React to someone who has never heard of it?#

Renderlesskit React is a component library that focuses on flexibility, reusability, and accessibility. We are developing the solution at timelessco, and I have been working with my colleague Navin Moorthy on this project for the past six months.

It's a renderless component library. The solution handles behaviors, logic, and accessibility via React hooks. The approach enables the consumers to have full control over styling while tackling the other concerns. Check out our Storybook preview.

How does Renderlesskit React work?#

Under the hood, Renderlesskit uses Reakit, created by Diego Haz. The toolkit comes with excellent base components and helpful utilities for handling accessibility and hooks-based component systems.

There are two parts to creating any component in Reakit: a component hook and a state hook.

Component Hook#

The component hook handles all the logic which the component needs and returns the component's accessibility logic and HTML props. We can also add event listeners/refs/a11y props in this hook. Reakit provides a wonderful createHook function which is used to create the hooks. The cool thing is that we can even combine multiple hooks together to supercharge our components easily.

It looks something like this:

```ts
export const useAccordionPanel = createHook<
  AccordionPanelOptions,
  AccordionPanelHTMLProps
>({
  name: "AccordionPanel",
  // We can compose with other hooks
  compose: [unstable_useId, useDisclosureContent],
  // Keys are generated automatically
  keys: ACCORDION_PANEL_KEYS,

  useProps(options, { ref: htmlRef, ...htmlProps }) {
    const accordionId = getAccordionId(options);

    // Add event listeners, do a11y logic, and other things
    return {
      "aria-labelledby": accordionId,
      ...htmlProps,
    };
  },
});
```

Reakit also provides a createComponent function that connects the hook to a React component.

State Hook#

The state hook is the main hook which handles all of the component state. It's a plain custom React hook:

```js
const useAccordionState = (props) => {
  // logic
  return { ...state };
};
```

Then we can create our components by combining those two hooks and use them in our app:

```jsx
const App = () => {
  const state = useAccordionState();

  return (
    <Accordion {...state}>
      <AccordionTrigger {...state}>Trigger 1</AccordionTrigger>
      <AccordionPanel {...state}>Panel 1</AccordionPanel>
      <AccordionTrigger {...state}>Trigger 2</AccordionTrigger>
      <AccordionPanel {...state}>Panel 2</AccordionPanel>
    </Accordion>
  );
};
```

With this approach, the benefit we get is that all our components are hook-based and completely unstyled. The users can compose one or multiple hooks together to extend the system, just like Lego bricks.

You can learn how Reakit works from the interview of Diego Haz.
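Since the components above ship unstyled, styling can live entirely in userland. Here's a rough sketch of that idea; the class names are made up, and passing standard HTML props like className through Reakit-based components is the assumption:

```jsx
const StyledAccordion = () => {
  const state = useAccordionState();

  return (
    // Style the hook-based components directly via regular HTML props.
    <Accordion {...state} className="rounded border">
      <AccordionTrigger {...state} className="p-2 font-bold">
        Trigger 1
      </AccordionTrigger>
      <AccordionPanel {...state} className="p-2">
        Panel 1
      </AccordionPanel>
    </Accordion>
  );
};
```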
How does Renderlesskit React differ from other solutions?#

The main difference is that unlike traditional component libraries, Renderlesskit does not have any styling opinions, nor does it depend on any CSS frameworks, while providing as much flexibility as possible.

It also differs from other libraries in the aspect of the core system. We are only extending the Reakit component ecosystem using reakit-system. It is similar to react-aria but uses reakit-system to achieve the same behaviors.

There are other similar hooks/component libraries out there that solve the issue nicely too. Consider the examples below:

Bumbag
radix-ui
downshiftjs
headlessui
react-spectrum

These are all amazing libraries which are very similar at the core, but what we wanted is an in-house component library at timelessco to use as a foundation for our next design system. You can read more about our core concepts online.

Why did you develop Renderlesskit React?#

Our CEO, Sandeep Prabhakaran, came up with the idea, as we needed an in-house component library which we can manage and which supports our requirements in the future. The hope is that we can build a UI library and ultimately create a no-code design system designer.

What next?#

The next step is to create an in-house design system for our company using Renderlesskit itself. We have already started working on it, and we are using Tailwind for styling and the solution as a base.

After finishing the design system, we plan on creating a system for our design tokens in Figma so that these tokens can be in sync with our code via a common theme specification. We are still researching this topic.

You can find the source code of our design system here: @renderlesskit/react-tailwind

What does the future look like for web development in general? Can you see any particular trends?#

In general, I'm seeing a trend in web development which is "rediscovering lost patterns". We are now leaning towards making things less complicated and using the platform. For example:

Tailwind rediscovered the way we use CSS.
React server components rediscovered the way we write server-side code.

I also think that tooling will become more and more sophisticated and faster in the future. We already have Snowpack, swc, Rome, and Vite doing excellent work on pushing the limits.

What advice would you give to programmers getting into web development?#

The only advice I can give is to stay focused, don't feel overwhelmed, start slow, and keep trying! :) Learn the fundamentals first and then jump to more advanced topics later. Grasping the fundamentals goes a long way.

Who should I interview next?#

Kumar Abhirup is doing excellent work on his project propagate.at.

Any last remarks?#

Over the past few months we worked on this project and learned a lot about accessibility and building a component library. Renderlesskit would not have been possible without the following people:

My colleague Navin Moorthy's excellent skills
Sandeep Prabhakaran's ideas and logical thinking
The amazing Reakit by Haz
chakra-ui by Segun
Devon Govett for his amazing work on react-spectrum, which we also took inspiration from

Conclusion#

Thanks for the interview, Anurag! I like the approach you chose as it avoids the pitfalls of coupling styling to components. I've seen this cause trouble many times.

To learn more, check out Renderlesskit React on GitHub.

PixelCraft - a Pixel Art Editor - Interview with Abhishek Chaudhary

It's cool to write small web applications for the sake of learning. You can set the boundaries yourself and experiment with new technologies. Abhishek Chaudhary developed a pixel art editor to learn, and in this interview he'll give an overview of the tool.

Can you tell a bit about yourself?#

[Image: Abhishek Chaudhary]

I am a web developer who likes making web applications. I create apps that are fun and useful. I also enjoy contributing to open source. I have a portfolio online at theabbie.github.io, and you can find my articles at freeCodeCamp.

How would you describe PixelCraft to someone who has never heard of it?#

PixelCraft is a pixel art editor; you can create pixel art images and animated GIFs using this tool. It is effortless to use and is a progressive web app, i.e., you can use it entirely within a web browser on any device.

How does PixelCraft work?#

PixelCraft uses HTML5 canvas to draw graphics on a webpage. You can create all sorts of pixel art using it. For creating animated GIFs, it uses gif.js, and it then renders the GIF within the browser.

To learn more about the technical details of the application, read Abhishek's code-oriented article about PixelCraft.

What challenges did you have while developing PixelCraft?#

The biggest challenge was making it responsive, i.e., compatible with all screen sizes; this is challenging because there's less space available on mobile devices. At the same time, we needed to utilize the large space available on the desktop.

Another challenge was to keep the codebase maintainable, so it's easy to add new features while the source remains readable for the wider community.

How does PixelCraft differ from other solutions?#

It differs in terms of simplicity and ease of use. Most other pixel art editors require some practice to master; PixelCraft is easy to use compared to other options. It is also mobile-friendly, so you can use it on your phone too.

Alternatives: Lospec, Pixilart, and Piskel.

Why did you develop PixelCraft?#

Because it was a fun project, and I wanted to build an easy-to-use pixel art editor which people could play around with. It has an easy-to-understand design and is compatible with all screen sizes.

What next?#

We plan to add many features to PixelCraft to make it more useful without losing its simplicity, adding support for useful editing features like layers, onion skinning, and much more.

What does the future look like for PixelCraft and web development in general? Can you see any particular trends?#

Web development has been very underappreciated since application development became popular. People don't understand that web apps can do most of the things that don't need a native app. They also provide performance and security. Web development has been gaining popularity thanks to developers worldwide and is likely to become more prevalent in the future.

What advice would you give to programmers getting into web development?#

There is no better way to learn than to build things; when you create something unique and share it with the community, they appreciate it, and that motivates you to do even better. Start small, keep trying new things, and you will master it.

Who should I interview next?#

I would suggest Anurag Hazra. He has been building great things and would be very insightful on this topic.

Any last remarks?#

The web is a great place and the easiest way to share your creations with people. Building great stuff is never tricky. Find the thing that motivates you, and don't give up.
Conclusion#

Thanks for the interview, Abhishek! I hope you push your pixel editor further, although it looks nice already.

You can try PixelCraft online and find the source on GitHub.
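As a footnote to the "How does PixelCraft work?" answer above, here's a minimal sketch of rendering an animated GIF in the browser with gif.js. The canvas drawing is illustrative rather than PixelCraft's actual code, and it assumes gif.js (plus its worker script) is already loaded on the page:

```js
const gif = new GIF({ workers: 2, quality: 10 });

// Draw two simple frames onto canvases and add them to the GIF.
["red", "blue"].forEach((color) => {
  const canvas = document.createElement("canvas");
  canvas.width = 64;
  canvas.height = 64;
  const ctx = canvas.getContext("2d");
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, 64, 64);
  gif.addFrame(canvas, { delay: 200 });
});

gif.on("finished", (blob) => {
  // gif.js hands back a Blob; preview it in a new tab.
  window.open(URL.createObjectURL(blob));
});

gif.render();
```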

Nullstack - Full-stack JavaScript Components - Interview with Christian Mortaro

If you look into what happened during the past few years in the world of JavaScript, you can see that component thinking made it to the mainstream. Even so, there's still some kind of a boundary between the frontend and the backend. In this interview, we'll learn about Christian Mortaro's approach to the problem.

Can you tell a bit about yourself?#

[Image: Christian Mortaro]

I am a 28-year-old Brazilian programmer, and I recently found out that I'm on the autism spectrum. I fell in love with code back when I was 11 years old, as it was one of the few things that made sense to me since my social skills are pretty bad. I prefer to spend my days in front of my computer working and testing new libs for fun, since I tend to have sensory overload when I go outside.

How would you describe Nullstack to someone who has never heard of it?#

Nullstack is a full-stack framework for building progressive web applications. It connects a stateful UI layer to specialized microservices in the same component using vanilla JavaScript.

Nullstack components are regular JavaScript classes with both the frontend and the backend. I want the developer to have a full-stack application by default without dealing with all the decisions. Nullstack allows you to make your application work as fast as possible, but it is also flexible enough so you can refactor it into something beautiful.

Consider the example below, where a stateful component uses a server function to read from a database connection saved in the server context:

```jsx
import Nullstack from "nullstack";

class BookPage extends Nullstack {
  title = "";
  description = "";

  static async findBookBySlug({ database, slug }) {
    return await database.collection("books").findOne({ slug });
  }

  async initiate({ page, params }) {
    const book = await this.findBookBySlug({
      slug: params.slug,
    });
    if (book) {
      page.title = book.title;
      Object.assign(this, book);
    } else {
      page.status = 404;
    }
  }

  render() {
    return (
      <section>
        <h1>{this.title}</h1>
        <div>{this.description}</div>
      </section>
    );
  }
}

export default BookPage;
```

In the example, Nullstack server-side renders and returns SEO-ready HTML when the user enters the application from this route. When the user navigates to this page, an API call is made to an automatically generated microservice that returns the book as JSON and updates the DOM.

How does Nullstack work?#

Nullstack generates two bundles: one for the server and one for the client, with the fewest dependencies possible. The framework is responsible for deciding when to use an API call or a local function; the programmer only needs to think about the behavior of their functions.

Each environment has its own context, which is a proxy passed to every function. The feature makes Nullstack a horizontal structure instead of a tree, which is very important for my daily job, since I often have to move code around based on customer feedback, and I wouldn't want to be locked into a structure.
In the example below, we parse the README only when the application starts and save it in the server context memory:

```jsx
import Nullstack from "nullstack";
import { readFileSync } from "fs";
import { Remarkable } from "remarkable";

class About extends Nullstack {
  static async start(context) {
    const text = readFileSync("README.md", "utf-8");
    const md = new Remarkable();
    context.readme = md.render(text);
  }

  static async getReadme({ readme }) {
    return readme;
  }

  async initiate(context) {
    if (!context.readme) {
      context.readme = await this.getReadme();
    }
  }

  render({ readme }) {
    return <article html={readme || ""} />;
  }
}

export default About;
```

The client invokes a server function and saves the README content in the client context, where it is available offline on other views. Both readFileSync and remarkable are excluded from the client bundle. There are many optimizations in this code, but the component looks almost as simple as a basic one.

How does Nullstack differ from other solutions?#

The nice answer is that it was, since the beginning, thought of as a complete solution that uses the same concept to solve every problem. The approach makes Nullstack very easy to learn, since picking up the first steps is enough to allow you to code full-stack. I used many more complicated stacks in the past, and you could always notice where things were glued together.

The not-so-nice answer is that it doesn't differ that much from any other web framework. All of the options have the same goal, and eventually, one inspires the other. Nowadays, the market is trending towards a "one size fits all" approach where React is the solution for everything. If you think of frameworks as shoes, Nullstack is just a shoe that fits my size and makes me comfortable.

Why did you develop Nullstack?#

My friends and I were getting burned out on web development, as it seemed like things didn't match our thought process. The first idea was to make an extension for React to make it look a bit more like Ember.js and add a server layer very similar to the server components they just announced. However, we got carried away and started modifying it so much that we eventually reset the project as its own thing. I wrote a class that would be "the ideal code for us" and reverse-engineered the idea until it worked.

What next?#

I'll keep developing my freelance projects with Nullstack, as I finally don't feel the need to change stacks at every project anymore. The work will result in more features being extracted into Nullstack as long as they follow the same principles. It's essential to me that Nullstack remains a single concept.

Besides that, I will focus on creating content on YouTube both in English and Portuguese, so more people can understand it while I get the plus of developing my social skills. More people have the same barriers as me, and I hope to reach them so they don't burn out on web development.

What does the future look like for Nullstack and web development in general? Can you see any particular trends?#

I can't tell what the future is, but I can tell you what I wish it were. I prefer a more decentralized web. For the last few years, I've been passionate about PWAs, since they remove the centralization of the app stores. The next thing I'd like to see decentralized is the frameworks, so developers can pick and choose a stack that makes them happy instead of one that looks good on the job market.
What advice would you give to programmers getting into web development?#

Test everything yourself, look inside the code, and don't merely use things because the community says so. Breaking stuff is the most fun part of developing, and there is no shame in figuring out that what you like is not the most popular thing, as long as you can deliver results.

Who should I interview next?#

Honestly, I have no idea. I lived in a "cave" for the last 28 years; I just gathered the courage to make a Twitter account.

Any last remarks?#

I want to thank everyone who gave me feedback and for the opportunity of this interview. Nullstack is almost two years old, and my poor communication skills and anxiety prevented me from showing it to people. I'm thrilled that none of the catastrophic scenarios I had in my head have happened so far.

Conclusion#

Thanks for the interview, Christian! I find it refreshing that there's movement to have shared logic in the same files while having transparent optimizations in place. Perhaps the division between the frontend and the backend will become blurry over time.

To learn more about Nullstack, head over to the project site. You can also find the project on GitHub. There's also a brief introduction to the topic on YouTube:

Preloading Web Assets

Preloading is a useful yet perhaps underused web development technique. When it comes to performance, the best work is the work you don't have to do. The second best is the work you can do ahead of time. Preloading achieves exactly this, as it gives the browser a hint to begin loading and evaluating scripts as soon as possible. In my use case, I used preloading to speed up the worker architecture of my application by loading the workers immediately when the first view is shown and the user has to log in.

Wrapping the idea of preloading as a script#

Although I often use webpack in my projects, in this particular case I found it most useful to wrap the behavior as a small script to invoke after the main build has completed. Note that I use modulepreload as suggested by Jason Miller.

To get to the point, consider the script below:

```js
const fs = require("fs");
const path = require("path");
const glob = require("glob");

// This script writes preload links for workers within the built
// source to tell the browser that it should load them beforehand.
// That avoids work later on.
function writePreloads() {
  const cwd = process.cwd();

  // The assumption here is that we're processing
  // build/index.html.
  const indexPath = path.join(cwd, "build", "index.html");
  const indexFile = fs.readFileSync(indexPath, {
    encoding: "utf8",
  });

  // In my case, I wanted to preload workers so I choose those
  // from the output and then add related links to the document
  // head.
  const workerFiles = glob.sync(
    path.join(cwd, "build", "static", "js", "*.worker.js")
  );

  const modifiedIndexFile = indexFile.replace(
    "</head>",
    workerFiles
      .map((p) => path.relative(path.join(cwd, "build"), p))
      .map((p) => `<link rel="modulepreload" href="/${p}" />`)
      .join("") + "</head>"
  );

  fs.writeFileSync(indexPath, modifiedIndexFile);
}

if (require.main === module) {
  writePreloads();
} else {
  module.exports = writePreloads;
}
```

If you end up using the script, make sure to install glob as a dependency to your project. The code could be generalized further, but I only wanted to give you a basic idea so you can adjust it to your needs.

It's good to note that it's not idempotent by design, so if you run it multiple times, it will add the tags multiple times as well. One way to avoid the problem would be to use something like cheerio to detect the case or rely on a pure JavaScript based check.

For those interested, it's possible to wrap the behavior within a webpack plugin. I've written in detail how to do this in the plugins chapter of my webpack book.

When to use?#

Preloading/prefetching is a powerful technique, and it can be useful in the following cases:

When inspecting the application, and especially its networking behavior, you realize some of the work should occur sooner. This can happen with workers you want to run as soon as possible, for example. That's when preloading will come in handy.
If you detect that the user intends to navigate to another page, you could prefetch it. Solutions like guess.js can be helpful here.

You shouldn't use the techniques for every asset as that would defeat the point. Consider it a means of prioritizing work.

Conclusion#

I hope you find the technique useful. At least in my use case it made a definite difference, as we were able to move loading earlier in the process to avoid work later.

Ivan Akulov has explained the difference between preloading, prefetching, and other options in detail over at his blog.
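As a small follow-up to the idempotency caveat earlier in this post, a pure JavaScript check is enough for the simple case; the sketch below assumes the injected links are the only source of modulepreload in the document:

```js
// Skip the rewrite if preload links were already injected on a previous run.
function hasPreloads(indexFile) {
  return indexFile.includes('rel="modulepreload"');
}

// Inside writePreloads, right after reading index.html:
// if (hasPreloads(indexFile)) return;
```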

CV Compiler - The Fastest Way to Improve Your CV - Interview with Andrew Stetsenko

When you apply for a new position or a business case, people often want to see your CV (curriculum vitae). Although it sounds simple to create one, it's far from it. Andrew Stetsenko has created a service that addresses the problem. Read on to learn more.

Can you tell a bit about yourself?#

[Image: Andrew Stetsenko]

I'm an HR-tech entrepreneur with some coding background and a passion for machine learning (ML) and natural language processing (NLP). I started to learn coding in 2003 with C and Visual Basic and worked as a QA engineer in late 2010. This experience helped me to grow my expertise as a tech recruiter later.

I'm not trying to fix recruiting, but I do my best to match companies and software engineers better, so I founded a range of products: CV Compiler, Relocate.me, and GlossaryTech. I'm a big fan of long-distance swimming, coffee tasting, and traveling in my spare time.

How would you describe CV Compiler to someone who has never heard of it?#

CV Compiler is an instant resume checker aimed at helping its users land more job interviews.

There are many stories on the internet about robots - applicant tracking systems - that rank your resume and then reject you if it is missing keywords related to the job description. With our team and external NLP scientists, we spent quite some time researching this topic. Briefly, all that stuff about ATS-compliant resumes is only misleading people. I even wrote an article about it. ATS means applicant tracking systems, and generally, the term is used to describe machine-readable CVs.

Our team followed a more scientific approach, and we built ML algorithms that assess resumes based on feedback from hundreds of recruiters/hiring managers and repeated mistakes in IT resumes.

CV Compiler was launched at the end of 2018. Since then, we've received a great deal of positive feedback from software developers around the world. These developers say that our product helped them significantly increase the number of interviews they received and land their dream job.

How does CV Compiler work?#

After uploading a resume, CV Compiler runs over 30 "resume-checking" algorithms to find weak spots and suggests ways to upgrade. There's also a built-in tool that analyzes your resume profile based on the keywords it contains - e.g., a Java developer with DevOps skills - and gives a list of skills worth adding.

We're also offering many helpful tools and content for a successful job hunt, from job search tactics and job search websites to cover letter templates and so on.

Why did you develop CV Compiler?#

Many job applications are rejected without an interview because of poor-quality, faceless resumes. Think about companies like Spotify or Amazon, which receive thousands of resumes.

The experience of running the Relocate.me job board has also shown that resume standards vary from country to country. Moreover, there is no single solution that gives one-size-fits-all recommendations on making your tech resume competitive in the US, Europe, and Asia at the same time.

Most of our resume improvement suggestions to Relocate.me users were repeatable, meaning we had to explain fundamental resume tweaks repeatedly. Having seen many bad resumes from around the world, we first wrote a static wiki on how to improve a tech resume but then decided to automate the solution to the problem of weak resumes, and that's how CV Compiler was born.

How does CV Compiler differ from other solutions?#

We often call CV Compiler the fastest way to improve a tech resume.
Instead of googling a lot, a person gets a list of proven suggestions within 60 seconds after uploading their resume. There are many online resume analysis tools, but these services are too generic, meaning they target professionals of any field, and the results are mediocre and very general. In contrast, CV Compiler is designed for tech professionals.

We use a taxonomy of over 3k keywords and once a quarter analyze thousands of job listings to determine the most in-demand tech skills for different developers - JavaScript devs, Ruby devs, Python devs, and the like. We're also working with various dev communities to share this content - the skills most tech employers are looking for today - with everyone. Hat tip for contributing to one of our articles about marketable JavaScript skills.

Can you describe the technical stack of CV Compiler? How does it all go together?#

We're using the Python-based libraries NLTK and spaCy for tokenization, lemmatization, and POS tagging. Those are the building blocks of our core technology. The tool for keyword analysis in large data sets - resumes, job descriptions - is built upon a seq2seq model in TensorFlow.

Using the REST API, our website and our B2B customers receive resume improvement suggestions in different formats - JSON, HTML, and PDF.

Based on one of our products (GlossaryTech), we've collected a taxonomy of technical keywords - 2,000+ - and their relationships.

We're also working on a React-based online resume builder that converts existing resume templates into editable ones. But it's still in progress.

What next?#

We're currently focused on working with coding boot camps - primarily in the US - and offer a tailored version of CV Compiler to boot camp grads. If you don't have experience in tech or even resume-writing experience, CV Compiler will be a mind-blowing solution to prepare your resume for your first IT job.

Besides this, we're offering an API for Resume Review as a Service. Every recruiting platform or job board can use our white-label solution to implement an automated resume review as part of their user acquisition strategy and give extra value to existing users.

What does the future look like for CV Compiler and web development in general? Can you see any particular trends?#

I'm a big fan of a seamless user experience where software forecasts your next question or your next step. So, all the power of web development should make the user flow simpler, more accessible, and mobile-friendly.

The last thing to mention is more accurately a standard, not a trend - data protection and web security. The focus means every piece of software development and architecture should be grounded in keeping user data secure. Software vendors should also be unquestionably transparent concerning how they are using our data.

What advice would you give to programmers getting into web development?#

Constant learning and dedication are vital in starting your path in web development. Be patient, because the most significant challenge will be finding your first job. However, once you are there and keep learning, your career will be streamlined, and you can achieve great results.

Who should I interview next?#

Davy Engone, for sure; not only because he is a good friend of mine, but because he is a passionate front-end developer who runs the Hackages education platform for software engineers.

Any last remarks?#

I'm incredibly grateful for this interview invitation, and I can only wish your community growth and all the best.

Conclusion#

Thanks for the interview, Andrew!
It feels like you've found a significant intersection between developers and companies. A CV is the interface for recruiting, and it makes sense to pay specific attention to it to help people find better jobs and companies to find better fits.

See CV Compiler online to learn more.

"SurviveJS - Webpack 5" - Amazon Version is Available

Doing a paper release of a book comes with an additional thrill, as once you go paper, there's no going back. Any mistakes you might have in the book are going to remain until you make a new edition. It's for this reason that especially technical books maintain errata separately from the book to list known issues. At the same time, a paper release forces you as an author to improve the way the book is laid out and to condense where sensible. Editing requires patience.

The main point of this release is that the book is now available through Amazon, and my hope is that the wider distribution will help to recoup some of the development cost of the book, as it's fairly time intensive. I found that editing takes a good chunk of time, especially with my current Leanpub-based setup.

If you aren't interested in what has changed, skip straight to the book.

Overview of the situation#

As mentioned in the previous release notes, a lot of work has gone into making the book take the changes made in webpack 5 into account. In the current version, which is available through Amazon as well, I've taken care to go further. In addition to simplifying edits and removing obsolete content, I've made chapters work standalone where possible so they work better in "random access" type of reading, as it's a lot to assume that people have seen the earlier content, especially if they access it through the web.

As a result, I've trimmed the book further, and compared to the previous paper edition, this one is more to the point while still packed with useful reference material that complements the official documentation.

Book Improvements - 3.0#

The book has received numerous changes, and it's not possible to list them all here. Instead, I've compiled a list of the most important ones:

I took specific care to make sure that the formatting of the book is consistent with itself, i.e. filenames are shown as index.js and so on. It's a little thing, but it counts.
Where possible, I condensed the layout of the book so that it fits better and less space is wasted. When I started the process months ago, the book had around 360 pages, and now it's closer to 300. Small changes add up.
I've updated book links where possible and dropped resources that are obsolete by now. Given the JavaScript ecosystem moves fairly fast, it's common for packages to become outdated.
I rewrote the Multiple Pages chapter in a simplified form that works standalone. It gets to the point without anything extra.
The Web Workers chapter uses webpack 5's native support by default now, although it still mentions worker-loader as an option.
Given the plugin I use for i18n doesn't work with webpack 5 anymore, I rewrote the Internationalization chapter based on code splitting while mentioning related solutions.

You can find the book below:

"SurviveJS — Webpack 5" - Free online edition
"SurviveJS — Webpack 5" - Leanpub edition (digital)
"SurviveJS — Webpack 5" - Amazon (paperback)
"SurviveJS — Webpack 5" - Kindle (digital)

A part of the income (around ~30%) goes to Tobias Koppers, the author of webpack. I support his work this way, given mine builds on top of his.

What next?#

As the book is in good shape to be ported to a course format, I'll put effort into that direction to see how it works out. It's not going to be easy, but at the same time the restructuring made to the book will help. The book content feels stable to me now, so I don't expect to make any major changes to it for a while.
Conclusion#

I hope you enjoy the 3.0 version of the book and find it useful in your work. Note that I'm active at the book Gitter channel if you want to bug me about webpack. You can also ask questions at my AmA.

Plasmic - The fast and fun visual builder for React - Interview with Yang Zhang

Developing user interfaces with React tends to require effort, and it's often done at the level of code. What if we could create user interfaces in the browser itself? It's this question that we'll explore in this interview with Yang Zhang. Can you tell a bit about yourself?#

Yang Zhang

I'm a software engineer currently working full-time on Plasmic, which is a dream job for me - getting to build novel visual tools for users like myself. It's funny that I'm now deep in the front-end space because I came from a very backend-focused past life. I used to study distributed database systems in my Ph.D. days, and I worked in applied machine learning at my last startup (Infer). I also used to work at big companies like Google and Microsoft, and now I'm all about tiny startups. In any case, I did a lot of product engineering over the years, and so building surfaces that directly touch and empower end-users became my drug!

How would you describe Plasmic to someone who has never heard of it?#

It's a tool for visually building components and full pages for any web app codebase! You can use it merely as a no-code full-page builder that plugs into your codebase or use it to design low-level, complex, stateful components like autocompletes and date/time pickers. One use case we're especially interested in is freeing up developers from requests from marketing, design, content teams, etc. These collaborators can instead directly create the things they want to see without being blocked by developers. Developers can, in turn, work on higher-leverage things than pixel-pushing.

Plasmic is starting as a pretty technical tool since we've been focused on letting developers build low-level components. Still, we're actively making the service accessible to anyone. That way, it's as approachable as a Squarespace or Wix (but for your codebase). We're focused on making it scale well up and down this spectrum.

What you build in Plasmic can be consumed flexibly. You can generate actual React code into your codebase, or you can consume it like CMS content over an API - and more. The central point of Plasmic is about integrating into your arbitrary real-world codebase.

Plasmic User Interface

How does Plasmic work?#

It's a browser-based tool, so it's easy to jump in and start creating things on any platform. What you're manipulating is the "real thing," the existing web platform (DOM/CSS/etc.), and not (say) vectors in WebGL or another medium, so what you see is what you ship. You create your visual designs in Plasmic, and then:

- Use the Plasmic CLI tool to generate a library of presentational components into your codebase (into your local git repo), or
- Consume the content you designed via an API so that you can treat Plasmic more like a CMS (and not need to touch the codebase whenever the content is updated).

All the logic/behavior (state bindings, event handlers, etc.) is done from your code, the usual way. So Plasmic isn't trying to reinvent programming, make your code via a GUI, impose any particular routing/state management, etc.

In the case of codegen, what Plasmic spits out for you is a library of presentational components. They take care of rendering what you designed in the visual tool. If relevant, they provide a flexible interface that lets you wire up any props you want to any element within the component.
For instance, if you're making a simple "new post" form in Plasmic with an input and a button, you'll be provided the PlasmicNewPost component that handles all the styling/layout/tags for you, and you can wire up your real state and event handlers like so:

function NewPost({ onAdd, ...rest }: NewPostProps) {
  const [content, setContent] = useState("");
  const history = useHistory();
  return (
    <PlasmicNewPost
      {...rest}
      postContent={{
        autoFocus: true,
        value: content,
        onChange: (e) => {
          setContent(e.target.value);
        },
      }}
      postButton={{
        onClick: () => {
          onAdd(createPost({ content, createdAt: new Date() }));
          history.push("/");
        },
      }}
    />
  );
}

It's a simple example, but things go much deeper, with components supporting different variations across different states, composition of components, responsive design, and more.

There's also a new feature within Plasmic called Plume, which generates the behavior of components for certain well-known component types. So you can create an arbitrary design for (say) a Slider component and then have that fully working and accessible without writing any code - undifferentiated work that is time-consuming and tricky to do yourself. You can trivially create a bespoke design system this way. Everything is powered by the excellent react-aria and react-stately libraries from Adobe. They go very deep on accessibility and take it further than most libraries I've seen.

Many different ways to design a slider, one of the component types Plume supports

How does Plasmic differ from other solutions?#

I'll compare Plasmic with a few categories in terms of trade-offs.

Vector design tools#

These are drawing programs, great for exploratory mockups. Plasmic is for building the real thing. The trade-off is that Plasmic is more complicated since it lets you express all of the nuance and complexity that comes with production - maintainable abstractions, element semantics and accessibility, combinations of states, more complex layouts, and so on.

Drag-and-drop site builders#

These are focused on static websites and are closed platforms. Plasmic focuses on flexible integration into arbitrary codebases and complex environments. The trade-off is that having full control of a closed platform is a faster way to build a simple end-to-end solution. Plasmic is a better fit for complex projects with development teams or (for instance) JAMstack projects that unbundle into best-of-breed CMS, framework, hosting, CI, etc. Plasmic certainly isn't getting into the CI business, for example.

Content management systems#

You can use Plasmic-as-a-CMS - invoke the client in your site's codebase, and from then on, your content teams can start using Plasmic immediately. We don't try to replace any existing CMSes, and we're looking to build integrations with CMSes and other data sources so that you can directly design with the real data in the tool. (And then you can still consume the output via an API, so it's a bit like layering CMSes.)

Design-to-code tools#

These are one-shot code generators that let you copy/paste into your codebase - as soon as you add your logic, you diverge from the original generated code. Plasmic lets you iterate on the design and consume it continuously from your codebase. The trade-off is that one-shot generated code can look more familiar since you can generate what looks like clean, hand-written code. While Plasmic can also run in this mode and produce such code, the output is then consumed as a library of components.
Why did you develop Plasmic?#

Two reasons:

- Push the envelope on collaboration between developers and non-developers, such as designers and content teams. For instance, today, developers do the dirty work of recreating mockups manually using code, and designers are accountable for the developer's attention to detail on this grunt work. Once developers integrate Plasmic into their codebase, designers - and anyone else - can step in and directly edit what ships into production, and developers are freed up to do higher-leverage work.
- Wanting better developer tools. Chrome DevTools is probably the most-used similar tool, but it focuses on debugging and not on authoring/editing. We wanted to see how far we could rethink developer experience with this. On this front, Plasmic, with its focus on just the presentational layer, is just the beginning.

What next?#

Plasmic is a very young project, so we have a ton of work to do. But we are already starting to see production usage and fascinating use cases that we never anticipated, which is super exciting. The usage is across small and large companies.

Our immediate focus is on fully supporting our budding community of developers and also designers. I also mentioned above that we want to make the editor more streamlined and approachable to non-technical creators. Another big project is code components - the ability to bring your own existing React components into the editor. But we want to tailor our roadmap to the use cases that we see from our early adopters. So try out Plasmic and tell us what you would like to see! We would love to start a dialogue.

What does the future look like for Plasmic and web development in general? Can you see any particular trends?#

For Plasmic itself, building interfaces like this is just the start. We want to take this further and empower folks to create things that are more dynamic and more end-to-end. Our north star is about dramatically simplifying and accelerating many more facets of building digital products.

What advice would you give to programmers getting into web development?#

Focus on the users, the experience, and the product goals you are ultimately serving. Proximity to the user is one of the extraordinary things about front-end development generally. (This proximity is what drew me to focus on the front-end in my career!)

Who should I interview next?#

I mentioned above that Plasmic's Plume component system leverages react-aria and react-stately. I would encourage you to chat with Devon Govett. He is the madman behind these projects and the Parcel bundler!

Any last remarks?#

We welcome you to check out Plasmic; we'd love to hear your thoughts!

Conclusion#

Thanks for the interview, Yang! I feel there's a lot of potential in tools that bridge design with development, and it looks like you are headed in an amazing direction. To learn more about Plasmic, see the React Finland session below:

You can also find Plasmic online and follow Plasmic on Twitter.

Multi-platform applications with JavaScript - Interview with Valentyn Poliskyi

Developing multi-platform applications is difficult. Thankfully JavaScript has made it a possibility for an increasing number of developers. To learn more about the approach, I am interviewing Valentyn Poliskyi. Can you tell a bit about yourself?#

Valentyn Poliskyi

My name is Valentyn Poliskyi, and I'm a full-stack JavaScript developer. Throughout my career, I've been developing different types of applications, including front-end web development, REST, and mobile. About five years ago, I developed my first website for a local print house. Then, I switched to freelance for a while and tried my hand at a couple of outsourcing and outstaffing IT companies. Now I'm growing professionally as a full-stack developer in one of the distributed teams at Daxx.

How would you use JavaScript to develop applications for multiple platforms? Which technologies do you prefer to use?#

JavaScript is the primary tool for client-side development as there are no other alternatives for this type of application (including HTML and CSS as related technologies). React, Angular, and Vue are highly popular front-end frameworks.

Node.js is commonly used for server-side development. Under the hood, it's a software platform based on the V8 engine, which compiles JavaScript to machine code, and the libuv library for input/output operations (such as reading and writing files).

If you're using JavaScript for developing mobile applications, you can choose one of several solutions - React Native, Ionic, PWA, or Cordova. All of these tools work differently. The good thing is that a user won't see any difference between a mobile application written with these technologies and one written in a "native" programming language.

My preferred stack is the following:

- Server-side development - Node.js (Express)
- Client-side development - React.js
- Mobile applications - React Native

The main reason I chose these technologies is the vast and robust community around them. Thanks to it, there are many ready-made solutions and practices for solving different kinds of tasks. Especially in the most popular stacks, the main issues have been solved already or are being resolved.

What does multi-platform development look like in practice? Are there particular challenges?#

Software development is primarily teamwork rather than working with technologies. If several or all parts of the system are written in JavaScript, it hugely simplifies team communication and task distribution. If you're understaffed at some stage of the development, any team member will be able to take on an urgent task. This way your front-end team doesn't have to wait for the back-end team to deliver the change as both ends use similar technology.

Correctly constructed components can be applied in several applications at once - for example, the logic for processing entities passed between applications (see the sketch at the end of this answer). If developers are well aware of JavaScript vulnerabilities and peculiarities, they can easily prevent potential errors, improve performance, and strengthen weak areas.

In the process of development, programmers may face the limitations of a particular tool using JavaScript since the language was initially used only for front-end development. In this case, the best solution would be to identify such problems and develop solutions during the system design stage.
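To make the code-sharing point above concrete, here's a minimal sketch of a platform-agnostic module - validatePost is a hypothetical helper, not from the interview - that a React web app and a React Native app could both import:

// shared/validatePost.js - hypothetical shared logic with no platform APIs
export function validatePost({ title, body }) {
  const errors = [];
  if (!title || !title.trim()) {
    errors.push("Title is required");
  }
  if (body && body.length > 5000) {
    errors.push("Body is too long");
  }
  return errors;
}

Since the module touches neither the DOM nor any native APIs, the same file can be imported by the web client, the React Native app, and even the Node.js backend for server-side validation.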
What are the main options available for developing multi-platform applications?#

Usually, if you need to develop a working system within a short time that solves a task of low or medium complexity, you should go with cross-platform applications. And if runtime performance is the main requirement for the system, you should choose faster tools specific to each platform.

Why did you choose JavaScript for developing your multi-platform applications?#

I chose JavaScript as I knew there were many available projects and a low barrier to entry for the technology. Later on, I was assigned to develop new modules as one of my project's requirements. I considered different technologies for server-side development and chose Node.js. The main reasons for this were the specific time frames and a team with strong expertise in the technology. The technical requirements of the project matched the capabilities of the Node.js platform as well.

In a while, we had to develop a mobile application for the client. I chose to use React Native this time. The platform's capabilities did not limit the technical requirements, so we managed to build an up-and-running application for Android and iOS in a short time.

What does the future look like for JavaScript and programming in general? Can you see any particular trends?#

No doubt, user devices are getting more powerful day by day. We'll see computing and business logic transferring over to the client level as much as possible. On the server-side, we'll transition to distributed microservices and cloud computing. JavaScript is developing as a language, and ECMAScript standard updates are released annually. Trends for the JavaScript-based tools used on different platforms go hand in hand with the directions of the "native" technologies for those platforms.

What advice would you give to developers getting into programming?#

The best advice I can give is to consider all possible options when solving different tasks and not limit your expertise to one specific domain. It will allow you to choose the optimal solution, considering the particular nature of various tools. JavaScript is the right choice at an early stage as it forgives mistakes and doesn't make you choose between approaches and a stack when solving tasks. As I mentioned before, there's a large number of different frameworks for front-end development.

Any last remarks?#

JavaScript is a universal problem-solving tool that is easy enough to learn. It is a perfect choice for beginners! However, a developer should be skillful enough to avoid pitfalls. A significant advantage of JavaScript is the number of libraries available for different platforms and the large community of programmers who support existing open-source libraries and develop new ones. If you're looking for a tool to develop a multi-platform application and the technical requirements don't force the use of "native" tools, JavaScript is an excellent choice.

Conclusion#

Thanks for the interview, Valentyn. It has been interesting to observe the impact of JavaScript on the way we develop applications over the years, and it seems it's becoming a better option year by year.

Rockpack - Skip config, code React - Interview with Sergey Aleksandrov

As you might know, configuring webpack isn't the most fun thing in the world, and that's exactly the reason why I wrote a thick book about it. For many React users, either Create React App (CRA) or a framework such as Next.js hides a lot of the complexity. In this interview, we'll learn about a new solution called Rockpack from its creator Sergey Aleksandrov. Can you tell a bit about yourself?#

Sergey Aleksandrov

Hi, my name is Sergey Aleksandrov, and I live in Kharkiv, Ukraine. I've been interested in web development since 2010. I started my journey with PHP and HTML markup and, after a year, switched to JavaScript. I am currently working as a full-stack developer. In my free time, I develop my own project, Rockpack, and other open source tools.

How would you describe Rockpack to someone who has never heard of it?#

The main goal of Rockpack is to reduce project setup time from weeks to five minutes. Rockpack allows you to create a React project with a correctly configured webpack, ESLint, Jest, and related tools. You get a project with server-side rendering support, CSS (SCSS, LESS) Modules, @loadable, SVG as React Component, TypeScript or Babel, support for many file formats, performance optimizations, and so on. Rockpack also provides powerful tools for logging and localization.

How does Rockpack work?#

Rockpack

There are several cases where Rockpack is useful:

- Beginners - With the help of Rockpack, any newbie to React can deploy a project of any complexity in a few minutes. Applications can be either regular single pages or come with a project structure, server-side rendering, etc.
- Large projects from scratch - Rockpack supports most of the webpack best practices and scales to big projects.
- Startups - If you need to quickly check an idea without wasting time on setting up the project.
- Libraries and React components - If you want to write a UMD library or React component, there's support for ESM/CommonJS builds as well as minification.
- Legacy projects or modular use - You can, for example, migrate a legacy application to SSR using Rockpack.

How does Rockpack differ from other solutions?#

Rockpack was built on three ideas: extensibility, modularity, and ease of use.

Extensibility#

Let's assume we have created an application with rockpack/starter and we want to extend the configuration. Instead of ejecting, like you would do in CRA, you can override any webpack property while still being able to keep your configuration up to date.

Modularity#

If you have an existing application but need to add server-side rendering, you can add rockpack/ussr and rockpack/compiler to your project. Doing this converts your application for SSR.

Ease of use#

Rockpack doesn't decide for you how to design an application. You can use any solution for state management, use any libraries, choose any project structure, design any architectural approach, and so on.

Why did you develop Rockpack?#

It took a long time to develop Rockpack. It all started in mid-2017 when my friend and I decided to create our own product - Cleverbrush. It is an online vector photo editor and collage builder. It consists of five applications:

- Landing page
- Editor
- Collage
- Admin
- API

Initially, we had to create a webpack.config for each application and monitor the dependencies. The duplication meant we had to fix issues in five places, and each time we added features, we had multiple places to modify. We endured it until we found customers. Customized versions for them required all the same actions, and the routine grew exponentially.
Then I thought, what if I write the config once, "correctly", so that it corresponds to most types of projects, and we won't have to edit it every time, update dependencies, etc. To save money, we test through logging as described in the Log Driven Development article. Rockpack proved to be excellent in solving our project's problems, and I thought that it might be useful not only for us but for the community in general.

What next?#

I want to cover the compiler module with end-to-end tests and add more articles to the official site and Q&A. Read the interview about end-to-end testing to learn more about the topic.

What does the future look like for Rockpack and web development in general? Can you see any particular trends?#

The long-term goal is migration to webpack 5. The process will not be easy since many plugins haven't been adapted to the new version at the time of writing.

Speaking about the overall state of web development, I would say it's relatively stable at the moment. The development of the language and frameworks is much smoother than before. We have reached a plateau of productivity, where we are ready to solve real problems.

What advice would you give to programmers getting into web development?#

Set achievable goals for yourself. Start with something simple that encourages and motivates you. One more thing that helps me is creating a plan for the day. I use a large whiteboard on the wall to create a todo list with tasks. Seeing real goals in front of you makes it harder not to achieve them.

Do not be afraid of not knowing something and making mistakes. We are not robots. We are humans - and humans make mistakes, and that's okay.

Who should I interview next?#

Our Ukrainian community is very diverse and vibrant. But among all, my top three are:

- Illya Klymov is a rockstar who makes tons of useful content and has invested titanic effort in developing the Russian-speaking community. He is an easy-going, erudite person who would be very interesting to talk to. He has his own YouTube channel with unique content.
- Vladimir Agafonkin is the creator of Leaflet, a trendy library.
- Yuri Artyukh - he also has a YouTube channel. The creativity of this person impresses everyone!

Any last remarks?#

SurviveJS is a considerable contributor to the React community; thanks a lot for promoting projects like Rockpack!

Conclusion#

Thanks for the interview, Sergey! It's going to be interesting to see how the community adopts the project. I know CRA and Next.js are the standard starting points for many, but at the same time, there's still room for innovation and improvement. Learn how to get started fast with Rockpack. See also the project on GitHub.

"SurviveJS - Webpack 5" - Further webpack 5 updates

Webpack 5 has been available for a couple of weeks by now. The previous release of the book covered the majority of the required changes, but I realized there was still more work to be done as I went through the official release post. You see the results of the additional work in this release of the book. If you aren't interested in what has changed, skip straight to the book.

Overview of the situation#

The current version of the book has been designed with webpack 5 in mind from the start, and I've taken care to use plugins that support webpack 5. The ecosystem is still adapting to the new major release, and I've done my share of related work in the packages I help to maintain to ensure an effortless transition.

During this work, I've managed to trim the book a bit more to prepare for a paper release. I may work further on the book to make the chapters work better standalone to support "random access" reading. The majority of the content works like this already, although I'm sure it can still be improved.

Book Improvements - 2.8#

The book has received numerous changes, and it's not possible to list them all here. Instead, I've compiled a list of the most important ones:

- I've taken care to replace webpack-cli specific command line flags with configuration, as the book now uses webpack-nano, a light option.
- I've condensed and restructured the content in places to fit the pages better.
- The cover of the book has been updated to include "webpack 5" in it given that's the focus of the book.
- The Loading Images chapter has been updated to use asset modules, a new feature of webpack 5. For most use cases, file-loader and url-loader aren't required anymore (a configuration sketch is included at the end of this post). The same change was applied to the Loading Fonts chapter as well.
- The Separating CSS chapter has become simpler thanks to the changes made to mini-css-extract-plugin. It's now able to detect automatically whether hot module replacement is enabled.
- The Minifying chapter uses css-minimizer-webpack-plugin instead of optimize-css-assets-webpack-plugin as it does the same job and supports webpack 5.
- The Performance chapter has been reworked with webpack 5 in mind.
- The Extending with Plugins chapter has been rewritten. I realized the testing approach I presented in the Composing Configuration chapter is a good way to develop plugins, so I moved the boilerplate to the plugin chapter and then rewrote the examples against it while using webpack 5 specific APIs.
- The Searching with React example has been rewritten in a more condensed form to fit the pages better.

You can find the book below:

- “SurviveJS — Webpack 5” - Free online edition
- “SurviveJS — Webpack 5” - Leanpub edition (digital)
- “SurviveJS — Webpack 5” - Amazon (paperback)
- “SurviveJS — Webpack 5” - Kindle (digital)

A part of the income (around 30%) goes to Tobias Koppers, the author of webpack. I support his work this way given mine builds on top of his.

What next?#

I want to publish the book in a course format through a platform. For this to work, I'll need to make sure each chapter works standalone, so it's likely some of the changes from that work will find their way into the book as well. Depending on how this work goes, I'll decide whether to use the current version for the paper release or whether to integrate those changes into it.

Conclusion#

I hope you enjoy the webpack 5 feature update. Note that I'm active at the book Gitter channel if you want to bug me about webpack. You can also ask questions at my AmA.
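As promised above, here's a minimal sketch of what the asset modules configuration can look like. The file patterns and size threshold are illustrative rather than lifted from the book:

// webpack.config.js - webpack 5 asset modules replace file-loader and url-loader
module.exports = {
  module: {
    rules: [
      // Emit images as separate files (what file-loader used to do)
      { test: /\.(png|jpg)$/, type: "asset/resource" },
      // Inline small SVGs as data URIs (what url-loader used to do),
      // falling back to separate files above the size threshold
      {
        test: /\.svg$/,
        type: "asset",
        parser: { dataUrlCondition: { maxSize: 8 * 1024 } },
      },
    ],
  },
};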

End-to-end testing - Interview with Erik Fogg

Testing is a topic that comes up often in software development as it's an important part of verifying that what we build works. That said, it's a complex topic given there are so many ways to test. To understand end-to-end testing better, I am interviewing Erik Fogg from ProdPerfect. Can you tell a bit about yourself?#

Erik Fogg

I'm Erik Fogg, the Chief Operating Officer at ProdPerfect. My background is actually in process improvement and operations efficiency, which seems odd for a tech company. But I'm surrounded by lots of great engineers who are much smarter than myself, and a big part of my job is making sure that our customers can embed us into their processes in a way that makes developing software far more efficient.

How would you describe end-to-end testing to someone who has never heard of it?#

I have a favorite analogy: when you're going to send Curiosity to rove around Mars, you need to test the heck out of it. You test individual components such as the camera or the drive train in isolation; this is like a unit test, and you can do it cheaply and frequently, allowing you to iterate. You want to make sure the camera and drivetrain can talk to the onboard computer; this is integration testing, and it's a little more expensive. But you do not know if the rover will work on Mars until you put it all together and drive it around in a Mars-like environment.

That is the essence of end-to-end testing: you put together and launch the entire web application and send a simulated person to use it. During testing, the server, the network, the integrations, and all the chaotic complexity that comes with them are live. You use end-to-end testing to make sure your application works in the wild. It's expensive and time-consuming, so you can't do it nearly as frequently as your lower-level tests.

How does end-to-end testing work?#

The old-school way of doing end-to-end testing is to launch the application on a test server and use it. We call this "manual testing." It's still done today, and sometimes that's the right way of doing it. In manual testing, you, as a human, follow several steps on the application, and you check to make sure you're getting the response you want. So you might buy a product and check out, and you want to make sure that the credit card was charged, that the shipping address is correct, etc. You've probably already figured out that you need to set up your test environment in some way to pretend it charged a credit card or shipped a product. All of that setup is necessary to make sure that you can test your application without (for example) buying a product every time.

The other way of doing end-to-end testing is to automate it. This is where Quality Assurance (QA) Automation Engineers get involved. Instead of just using the application, QA Automation Engineers write scripts that send simulated users (think: bots) to use the application, and those scripts look for the kinds of responses they want (such as "credit card is charged" or "product is shipped"); a sketch of such a script follows below. By writing these scripts, you can test an application more quickly, reliably, and cheaply than doing it manually.

How does end-to-end testing differ from other solutions?#

It's part of the whole picture. Martin Fowler has a great article on how end-to-end testing fits into a more extensive collection of testing methodologies that are necessary for a great software team to ship high-quality code quickly.
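To make the scripted approach above concrete, here's a minimal sketch of such a bot written with Playwright. It's purely an illustration - not ProdPerfect's generated output - and the URL, selectors, and flow are made up:

// e2e-checkout.js - run with: node e2e-checkout.js
const { chromium } = require("playwright");
const assert = require("assert");

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Simulate a user buying a product and checking out.
  await page.goto("https://shop.example.com/products/1");
  await page.click("#add-to-cart");
  await page.click("#checkout");
  await page.fill("#card-number", "4242 4242 4242 4242");
  await page.click("#pay");

  // Assert the response a real user would care about.
  const status = await page.textContent(".order-status");
  assert.ok(status && status.includes("charged"), "expected the card to be charged");

  await browser.close();
})();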
The Mars Rover analogy works well for us here: you need to test the components of your code in different ways:

- Unit testing - Test each module of code to ensure it does what it's intended to do. For example: does your Sales Tax Calculator correctly apply a 6.25% sales tax when you enter Massachusetts as the state?
- API/Integration testing - Modern applications are architected as multiple semi-independent services that work together. API and integration testing makes sure these services can effectively interface with each other (this is a vast oversimplification, but if you're new to API testing, it gets the idea across).
- End-to-end testing - As explained above.

All three of these are critical parts of functional testing. There are other important things to test in your application, including performance, load, UX, and accessibility. The pyramid largely encapsulates types of functional testing.

Why did you develop ProdPerfect?#

End-to-end testing is a mess in a lot of ways. It's expensive to build, it's challenging to maintain (you have to change tests when you change code), and it's unstable (some tests will flip between passing and failing even when the code has not changed). Many solutions attempt to make it a little easier to maintain end-to-end testing, but the biggest problem is otherwise going unaddressed: what tests should you write?

You can't write tests for every possible workflow through your application: you'd have a test codebase bigger than the application codebase. So you're forced to prioritize: what do you write? Teams have different heuristics to decide what they will and won't test, but these are all forms of guessing. That guessing means critical tests are missed, and unnecessary tests are written. Bugs make it into production, and test suites are more extensive (and thus more expensive and longer-running) than they need to be.

ProdPerfect analyzes product analytics data to build and maintain test code automatically. In practice, this means we let the users of our customers tell us what's important to test, just by using the application. We use this data to prioritize tests to make sure what users care about always works. As we can automatically build and maintain the tests, we can support test automation far more efficiently than humans can.

What's next?#

For us, it's improving our machine learning systems to begin anticipating what new tests need to be developed even earlier in the software development lifecycle. It will be hugely valuable for our customers. After that, we want to port what we're doing over to mobile applications - which is a lot harder than it seems at first glance.

What does the future look like for end-to-end testing and web development in general? Can you see any particular trends?#

I think the software industry is in a bit of denial about Machine Learning and AI coming after everyone's jobs. Part of that is because we're building the AI - how could it get us? Web development is becoming increasingly modular and repeatable. Testing is already there. I see ML and AI getting smart enough, pretty quickly, to build most web applications with a Product Manager at the steering wheel, without engineers writing code. Give it five years.

What advice would you give to programmers getting into web development?#

Well, based on the above: "develop other skills!". Slinging code is excellent, but we've forgotten why engineers exist: to solve problems. Become a great problem solver. Learn to analyze complex situations with limited data and competing priorities.
You'll have incredible job security if you can do this.

Who should I interview next?#

Jason Arbon of Test.ai. The guy's a genius; he and I have highly complementary perspectives on testing and software development.

Any last remarks?#

"Don't fear the machine." By the time ML or AI can do something, it has already stopped being a creative human task that stimulates the mind and helps you learn and flourish. We should always be moving forward to the new frontiers where human creativity is at its best. I have a post of my own that expands on this.

Conclusion#

Thanks for the interview, Erik! I feel end-to-end testing is an often underappreciated technique, and I hope developers see more value in it in the future. You can learn more about ProdPerfect online.

Algolia with Netlify - Easy search for static sites - Interview with Samuel Bodin

Setting up and maintaining search on a static site is often a chore. You could handle indexing yourself using a solution like Lunr, but even then you'll encounter limitations as the size of the index grows. That's where services like Algolia come in. To make the service easier to integrate with static sites hosted on Netlify, Samuel Bodin and his team developed a specific solution. Can you tell a bit about yourself?#

Samuel Bodin

Hi, my name is Samuel Bodin, and I'm a Senior Software Engineer at Algolia. More specifically, I work on our crawler team, which operates an in-house robot that indexes our customers' websites. I studied art, cinema, and photography before changing my path and starting a web career. I have gained experience as a PHP developer and currently use mostly Node. My current work includes developing our JavaScript crawler, the backend, the frontend, and maintaining our infrastructure.

How would you describe Algolia and Netlify for someone who has never heard of them?#

Algolia is a distributed search engine with a high-speed and simple API that will ingest any JSON record, index it, and allow you to search inside the records without any strong technical requirements. Netlify is an automatic website builder and host that easily plugs into your Git repositories and handles all the work required to host a website in a fast and secure way. Algolia and Netlify share a common goal: to alleviate technical requirements and turn what were once complicated tasks into a few clicks or API calls.

How does Algolia work with Netlify?#

Algolia can be used directly with our API clients, available for a dozen languages. However, that requires every developer to structure their data and push it when it changes. We developed a small plugin that integrates into Netlify and uses our internal crawler to navigate and extract information directly from any website to help with this task. The plugin will automatically ping the crawler when the user deploys their website in Netlify and populate an Algolia index with all the data we found in a matter of minutes. All of this makes the task of indexing very easy and maintenance-free. You can learn more at the launch video below:

How does using Algolia with Netlify differ from other solutions?#

With a traditional setup, you will need to code and maintain your own indexing pipeline. It requires a lot of knowledge in various fields, like the Algolia API, concurrent indexing, filesystem reads, efficient database usage, etc. In the case of Netlify, which has no backend, that usually means you will need to do the following (a rough sketch of this manual approach follows below):

- Fully generate your website statically
- Read the filesystem
- Look for HTML files
- Find the relevant information inside each file
- Structure your JSON records
- Push them to Algolia

In comparison, the Algolia plugin for Netlify only requires you to connect your Netlify account to Algolia and install the plugin via our UI, and you are set with nothing to code or maintain.

Why did you develop the Algolia plugin for Netlify?#

We wanted to provide an effortless way for developers to use Algolia. Even if our API is relatively straightforward, we understand that it is not a trivial thing to implement correctly. Netlify has a great environment, and it was a tremendous challenge to integrate and make everything feel as seamless as their user experience. Besides, our goals are very similar, and it feels like the ecosystem will work great together.
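For contrast, here's roughly what the final steps of the manual approach look like with the JavaScript API client. A minimal sketch - the credentials, index name, and record shape are placeholders:

// index-site.js - manual indexing that the Netlify plugin automates away
const algoliasearch = require("algoliasearch");

const client = algoliasearch("YOUR_APP_ID", "YOUR_ADMIN_API_KEY");
const index = client.initIndex("my_site_pages");

// Records extracted from the generated HTML - structuring them is your job.
const records = [
  { objectID: "/about", title: "About", content: "..." },
  { objectID: "/blog/hello", title: "Hello", content: "..." },
];

index
  .saveObjects(records)
  .then(() => index.search("hello"))
  .then(({ hits }) => console.log(hits));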
Algolia installed on Netlify

What next?#

Launching the plugin was a success, but we don't plan to stop the development. To start, we have provided the simplest plugin possible to focus on ease of use, but we know that not every use case will fit our default strategy. We want to have the maximum feedback possible from Netlify's customers so that our users' needs shape the roadmap. That's why we have opened our GitHub issues and a dedicated Discourse forum.

What does the future look like for your crawler and web development in general? Can you see any particular trends?#

Our crawler is an enterprise tool that lives on top of Algolia's core search engine. That means that it will, as a project, benefit from the full power of Algolia and the feedback from big customers. With this in mind, we can provide a powerful subset of features to every Netlify user. We are trying to improve and automate more of our data extraction strategies. We want to handle more content types and websites, such as JavaScript-powered websites, password-protected websites, complex documents like PDFs, etc.

In general, we see a good trend around serverless. That's why we believe in Netlify and in this plugin. People rely more and more on SaaS/PaaS to handle their workload and focus development hours on their core product.

What advice would you give to programmers getting into web development?#

Start small, ship fast, break things, but not too much. Be kind to people, and be humble. We never stop making mistakes, so never be afraid of making them, but always understand why. Web development is a field that is changing fast; you don't have to know everything; no one can.

Who should I interview next?#

I am very grateful to work with amazing people, and there is almost no day since I started that I have not been impressed with Sarah Dayan's work. She has a lot to share.

Conclusion#

Thanks for the interview, Samuel! It seems you've developed a nice solution for anyone hosting their site on Netlify in particular, and I've found Algolia useful on this site as well. To learn more, check Algolia's Netlify plugin on GitHub.

Ukrainian developer market - Interview with Eliza Kravchenko

I've encountered many Ukrainian developers during my career and even visited the country a couple of times speaking at conferences. During each trip, it was surprising to me how vibrant the developer community is there. To learn about the Ukrainian developer market, I am interviewing Eliza Kravchenko from Mobilunity, a local sourcing company. Can you tell a bit about yourself?#

Eliza Kravchenko

My name is Eliza Kravchenko, and I am the lead of the resource management team at Mobilunity. The position links the company, clients, and developers. My career path started seven years ago in the customer support department. During these years, I gained experience working as a shift supervisor, a VIP customer retention manager, and an internal trainer for newcomers. Having diverse experience working with clients, I arrived at a role that utilizes all the skills and knowledge I have gained during my career.

How would you describe the developer market at the moment?#

In recent years the Ukrainian IT market has significantly improved its position in the world market of IT services, so the number of professionals employed in the industry is continuously growing. I even see it with my friends and acquaintances who are more and more interested in software development and are considering a career change and transition to IT from various other areas (e.g., advertising, sales, SEO, etc.). Many professionals are now in the early stages of gaining their experience. As the industry grows, within 2-3 years, juniors will get enough knowledge and rise to the middle level and sometimes even above, which is undoubtedly a great opportunity.

At the moment, Ukraine is the second-largest talent pool in Eastern Europe by the number of software developers. As the number of specialists grows, we have a good chance of becoming number one very soon!

How does a developer's salary get determined? What factors affect it?#

The developer's salary is determined based on the market. The following factors affect it:

- Years of experience
- Knowledge of technology and the depth of this knowledge
- Domain knowledge that may be a defining factor for the interest in some candidates
- The quality of the resume/CV and thus the candidate's ability to present and sell their experience
- Foreign languages and experience in working with foreign customers
- Managerial/leadership experience

Much depends on the technologies that a person works with, and soft skills are also essential - the ability of an engineer to look at the business side of a technical problem may seriously increase their demand in the market. All these criteria can be used to calculate the market value of the profile. We recommend not only estimating developers based on buzzwords and seniority but also paying attention to the problems they have solved. The so-called “portfolio of solved challenges” can sometimes be very impressive even when the person has only a few years of experience.

What types of developer compensations and benefits are common in the industry?#

The salary is usually tied to USD or EUR, mainly because our IT industry is export-oriented. There is a base salary, and also bonuses depending on the project stages. Intangible motivation most often includes insurance, English classes (courses) in export-oriented companies, additional compensation for training, conferences, etc.
Product companies now also consider giving stock option incentives, but this is still quite rare and an undeveloped initiative, although it is growing in popularity quite fast.

How have developer salaries progressed over the years?#

In general, salaries for different seniority levels (Junior, Middle, Senior) have remained almost at the same level, meaning that within a specific seniority level and technology stack, we have not seen significant differences in pay expectations over the past years. At the same time, isolated increases in salaries are possible in cases where specific developers, in addition to technical skills, acquire deep domain expertise and become more in demand in niche industries and technologies.

For an average person, the transition from junior to middle and senior levels can increase the salary significantly at each step. With this being said, the developer's salary grows during the first four years of one's career and remains more stable at the senior level. After that, everything begins to depend on how well the person can sell themselves and how they have mastered and enhanced their technical and managerial skills.

How do you expect developer salaries to progress in the future? Can you see any particular trends?#

As the demand for software development services grows globally, the number of technology-related vacancies is also increasing in Ukraine. Salaries form accordingly, based on supply and demand. Fields such as artificial intelligence, big data, machine learning, and robotics may see higher salaries as more of these companies are discovering Ukraine. As the companies working in these industries understand the Ukrainian market better and leverage it for talent, the demand may grow even faster than the already high growth rate of our IT industry.

What advice would you give to programmers getting into web development?#

English skills are fundamental. While starting to get into software development, please do not forget to learn foreign languages. It is also essential to always keep yourself in technical shape and continuously learn new things. A person who has stopped developing quickly falls out of demand in the market.

Above all, Mobilunity's clients value the dedication of Ukrainian professionals to business success. The scale, accessibility, high level of education, ease of communication, and commitment to work and success of Ukrainian tech talent make the world attracted to what Ukraine has to offer.

Who should I interview next?#

I would suggest talking to a good friend of mine, Swiss-based Philippe Lautier. He has led local and remote engineering teams for small companies and large corporations in four different countries over the last 22 years. He is an outstanding manager, strong both in driving business goals and in setting a high vision and ambitions for his people.

Any last remarks?#

The Ukrainian IT market is growing at a 20-30% rate annually and continues to be one of the most quickly developing and dynamic segments of our country's economy. We expect that Ukraine will further strengthen its status as a hub in the international arena. We at Mobilunity are always looking for partners in staffing smart and dedicated IT talents here in Kyiv as we love what we do. :)

Conclusion#

Thanks for the interview, Eliza! It was interesting to hear your views on the Ukrainian developer market.
I've seen the same: IT has become a lucrative career path for many, and it's likely a good time to be in the field as it keeps expanding.

Eleventy - A simpler static site generator - Interview with Jeremias Menichelli

Static site generation is a topic that is becoming increasingly popular due to the rise of JAMStack. Instead of maintaining a server, the idea is to generate static markup for the entire site. Depending on the use case, this can be a great fit that avoids much of the trouble of hosting. To learn more about a modern, upcoming solution, I am interviewing Jeremias Menichelli about Eleventy. Can you tell a bit about yourself?#

Jeremias Menichelli

I still remember the morning I biked to my uncle's place because it was probably one of the first homes to have internet access. I remember writing down a URL, hitting the ENTER key, and waiting 20 minutes to see nothing. I had such a bad first experience of the web when I was something like ten years old that I told myself one day I would have a tiny little place in the web industry, and I'd make sure what I work on doesn't create that sort of frustration for users. That's probably the main reason why I'm so into progressive enhancement, static site generators, and accessibility now as a web developer. You can read more about my complete journey at my website.

How would you describe Eleventy to someone who has never heard of it?#

Eleventy is a static site generator that took all the lessons learned from the generators of the past decade and applied them now. I loved Jekyll, but it was too slow even for not-so-big sites; other people love Hugo because it's fast, but as soon as you need to customize something, you have to touch the Go programming language. Often front-end developers aren't used to it, so that's a definite obstacle.

Eleventy is a high-speed alternative, super close in features to Jekyll, and as soon as you have to extend it, you have JavaScript on your side. I found myself migrating my old Jekyll site, and not only was it fast, but as soon as I found something was missing, it took me minutes to write a filter or a plugin to unblock me (a sketch of such a filter appears below).

Starting with Eleventy is as easy as relying on its defaults, creating an index.md file, and starting Eleventy's server:

echo "# Hello World" > index.md
npx @11ty/eleventy --serve

With that, you have a static generator pipeline and server in place. The next natural move is to begin with layouts and collections if you want blog posts or notes.

How does Eleventy work?#

If you are familiar with Jekyll, I would say it's pretty much the same. You have a base folder, and Eleventy turns anything with the extensions you marked in your config into pages. The main difference is that Jekyll uses an underscore to mark directories to crawl, like _posts, while Eleventy grabs anything with (for example) the .md extension. That means you have to manually configure the collections you want to iterate over, but it's better because it gives you more freedom. Also, it's template engine agnostic: you can use Nunjucks or Liquid, or both, even template literals, and do asynchronous data injection at build time. It's a powerhouse!

How does Eleventy differ from other solutions?#

I think that Eleventy is less opinionated than other generators. Still, it has nice defaults that you can change later if you want; the chance to go as complex as you need is ideal for developer experience.

Why do you develop with Eleventy?#

The main thing for me is that it's built in the same ecosystem I work in. For Jekyll, I had to deal with all the setup of a Ruby environment, which I'm not familiar with and probably don't want to deal with (not bashing the Ruby ecosystem, I'm just not used to it and I don't need to). But Eleventy runs in Node, which I already use for my work and feel comfortable debugging.
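To give an idea of the kind of quick unblocking Jeremias mentions, here's a minimal sketch of a custom filter registered in .eleventy.js - the shout filter is made up for illustration:

// .eleventy.js - registering a custom filter takes a few lines
module.exports = function (eleventyConfig) {
  // Usable in Nunjucks or Liquid templates as {{ title | shout }}
  eleventyConfig.addFilter("shout", (value) => String(value).toUpperCase());
};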
What does the future look like for Eleventy and web development in general? Can you see any particular trends?#

I don't know if I see a trend or just wish it would become one, but it seems some web developers are becoming more aware that we have a lot of different and more optimal options around the tech stack for specific projects. I understand why companies go for widely used frameworks; onboarding new people to a custom stack takes a lot of time (and money, I guess). Building on top of a broadly used and well-tested ecosystem gives a feeling of having a more stable foundation. Still, we need to see that not all cases fit the default technology.

There's a super good talk about game development called "Marvel's Spider-Man: A Technical Postmortem" by Elan Ruskin that's super exciting. As it became popular, you might have seen it already. In the talk, he says many times: fit the technology to the context. I feel we are trying to do the opposite with frameworks, fitting the context to our technology. Because we as developers need to pay bills and feed a family like everybody else, we need to learn these tools to get a job, so I don't see things changing in the short or mid term. Still, I see more people trying new things and going with other options, like Eleventy, to start a new project. We do have a lot of options to start with now, which is a good thing.

What advice would you give to programmers getting into web development?#

Often more junior people reach out to me with career questions. They usually ask me if they need to write articles or give talks to be a web developer. What I tell each of them is simply: no. Writing good articles takes a lot of time and energy, the same as preparing and delivering a really good talk, so you have to enjoy the process of making them. Unless you are into writing or public speaking, explore, experiment, and enjoy your work. Focus first on learning what you need to be useful to the company paying your salary and on helping the team you work with every day; the rest comes after that.

Who should I interview next?#

Cassie Evans or Rachel Nabors.

Any last remarks?#

Let's remember that though our job is vital in our lives, the most important thing and what defines us more is the impact we have on the people around us and the place we live in, so whatever we choose to do, let's be kind to people and to our planet.

Conclusion#

Thanks for the interview, Jeremias! Eleventy definitely seems like a nice step forward, particularly for Node developers. I remember using Jekyll in the past and experiencing similar issues with it. To learn more, head to the Eleventy site.

How to learn effectively as a developer

I recently received an interesting question from a reader who changed his career path and headed into technology. He mentioned he was getting overwhelmed by the amount of information related to web development and computer science and asked my advice on how to cope with the situation. Given it's a topic that has value beyond the reader, I go through the main points of my response in this post while expanding on several ideas.

Learning is about making sense of things#

My background is fortunate as I was exposed to many technologies and ideas during my time at the university while I was completing my Master of Science (MSc) degree. A lot of what I learned during that time is still relevant today, and I would say the most significant benefit of education is that it gives you some idea of what you don't know. When you need to learn more, you then know the terms to use for searching.

Essentially learning is about making sense of things. As you encounter information, you'll frame it with things you've learned before. For this reason, people say learning your first programming language is the hardest, and the next one is easier. The same goes for natural languages. If you know English, it's likely straightforward for you to learn German as the languages are related.

Get immersed in the topic#

During the past few years, I've been learning German casually by spending some time with mobile applications each day. Given I live in a German-speaking country, I am immersed in the language as well. Although I'm far from fluent in the language, I've developed my muscles in German, almost without noticing. Becoming immersed in a topic you want to learn is an excellent way to pick up things, as you'll start to notice patterns and become more aware of the terms that people are using.

Learn fundamentals and patterns over specifics#

Fundamentals are less likely to change than language specifics. Often the bigger ideas translate from one language to another even if a language doesn't embrace them in its design. Patterns that we learn give solutions to problems that have already been solved. Developing your collection of them and an instinct for dealing with them is an integral part of becoming better at development.

As a web developer, you'll benefit immensely from understanding how the web browser renders data and what technologies it's using. Having a good understanding of the vital standards - CSS, HTML, and JavaScript - is valuable no matter which libraries or frameworks you use for development.

Track development news to spot trends#

Keep an eye on the development news to spot prominent patterns and directions. It's good to see if there are commonalities, and often you can observe that there's a lot of excitement around a specific topic. You can try to figure out where in the hype cycle a given technology fits to see if it's in the early phase or becoming something more stable.

Keep one foot on the ground#

When developing new projects, start with at least some technology you know well and add new technologies on top. Keeping one foot on the ground like this is an excellent way to avoid becoming frustrated when trying to learn too much at once. In the beginning, starting simple and expanding from there is an excellent way to go. The more moving parts you have, the more there is to understand, and the harder it gets.

Learn around specific themes#

Focus on learning around specific themes. For example, two years ago, I focused on GraphQL; last year was more about design systems.
This year seems to be going in the direction of Deno. When you focus your attention on a topic, you'll become more aware of any news or articles related to it, and you'll automatically learn more about it. Work on projects you find motivating# Work on projects that you find motivating, even if the goal is learning. It's good to develop projects to learn, but it's even better when you are solving concrete problems. I've found that working on prototypes aside from client work gives me more freedom to explore the boundaries of what's possible. Free exploration is a great way to develop skills, especially if you have a good problem or set of issues in mind. It's the perfect excuse to try out technology you might otherwise not be able to test. For example, in a recent project, I replaced a traditional webpack setup with one I built around Deno, and in doing this, I learned a lot. Track what you learn in a system# Track what you learn in a system so you can refer to it later. I capture information in flat Markdown files that are pushed automatically to the cloud. There are more sophisticated systems, such as Foam or Memex, that exist within the space. Adopt a personal management system# Adopt a personal management system to organize what to do and when. I use OmniFocus, but any free alternative or a system of your own can work. The point is to have a place where you can push ideas from your mind so you don't forget them. The broader approach is called Getting Things Done. Learn from the masters# As you gain skills, you'll learn to appreciate other people who know more about the topic. There's a reason why specific music artists are sometimes called musicians' musicians. Those are the people that might not be known by the wider public but are true masters of whatever they are doing. When I was learning drawing and painting, I used to copy the masters' works to figure out what they did and how. I didn't replicate the technique; instead, I focused on reproducing the result. Doing these exercises taught me to understand better what made the pictures I was copying work, while sharpening my drawing and painting skills. Over time, that became something natural that stuck. Read code from other developers# For someone wanting to become a developer, there's value in reading code by others and understanding why it works and why it's written that way. You'll notice that there are different styles and approaches, even within the same language. Contribute to open source projects# One of my first touches with serious programming had to do with Blender, a 3D suite, as I learned how to modify the application's user interface to my liking. Although it started with simple changes, I gained a lot of knowledge about open source, 3D, and programming during the time I spent with the project. Contributing can be a great way to develop your skills and connections. Sometimes you may notice that a project you need doesn't exist yet. That's why I developed projects such as webpack-merge, Sidewind, and many others. Although you may be addressing your own need, sometimes it turns out other people have the same problem, and they'll contribute and may even be happy to continue the projects as you move on to other work. Reflect on what you have learned# It takes consistent effort to become better, but at the same time, it's rewarding to see how you gain skills and how much you've improved over the years. That is why reflection has incredible value, as it can show how much you've achieved despite the obstacles you may have had to face.
Conclusion# I hope the post gave you some food for thought. Learning is a skill in itself, and you'll likely have to find the ways that fit you best. As I'm curious about your learning hacks, feel free to leave comments below the post.

Pipcook - Bridging JavaScript with Python for machine learning - Interview with Wenhe Li

There's a lot of excitement about machine learning and its applications. The question is, what can you do with it, and where and how do you apply the technique? To learn more, I am interviewing Wenhe (Eric) Li, the creator of Pipcook. Can you tell a bit about yourself?# Hi folks, this is Wenhe (Eric) Li, and I am currently an SDE at Alibaba Inc. My work involves combining front-end development and artificial intelligence (AI). One of my tasks here is developing Pipcook, an open-sourced machine learning (ML) and deep learning (DL) framework designed for front-end developers. How would you describe Pipcook to someone who has never heard of it?# Pipcook is a tool that helps you develop, train, and deploy an ML/DL model without much prior knowledge. The whole workflow is highly abstract without losing scalability. It lets you use popular Python-based machine learning solutions, such as NumPy, scikit-learn, jieba, and TensorFlow, easily through its interface. How does Pipcook work?# Pipcook wraps Python using BOA# Since this framework is front-end and Node.js developer-oriented, and most DL/ML libraries have been written in Python, we created BOA to bridge the languages. BOA allows us to directly import and call Python modules and methods in JavaScript (a minimal sketch follows below). In this way, we can utilize the DL/ML ecosystem in Python without worrying about learning a new language. Pipcook leverages the concept of pipelines# We introduce a concept of pipeline, which contains dataCollect, dataProcess, dataAccess, datasetProcess, modelDefine, modelTrain, and modelEva. Pipelines offer an abstraction over a typical DL/ML model lifecycle. Pipcook developers, including the community, offer the most common implementations of these parts (we call them plugins). People who want to train their model can use an existing pipeline or combine plugins to make their own pipeline, just like playing with Lego. How does Pipcook differ from other solutions?# Pipcook lets you use both JavaScript and Python# Since Pipcook allows you to write DL/ML models in JavaScript with Python modules, we can benefit from the great libraries and packages of the two ecosystems. It's incredible, as you can decide to assign IO-oriented jobs to Node.js and put more DL/ML training-related work under Python. Doing this allows you to get the most out of both. Pipcook uses pipelines and plugins# Pipcook introduces pipelines and plugins to the DL/ML workflow. Doing this decouples the complexity of developing ML/DL models and makes the plugins highly shareable. Pipcook uses state-of-the-art techniques# Since Pipcook is an experimental project, we can use state-of-the-art techniques and languages to develop our project. That means using Rust, WASM, WASI, WebGPU, and more. Why did you develop Pipcook?# I love JavaScript and its magical syntax. However, I have to use Python to develop DL/ML models due to the abundant Python modules and ecosystem. Pipcook gives me a new way to establish DL/ML models in JavaScript without losing the Python ecosystem. So far, we've seen a clear tendency that AI is coming into every corner of the world. And in the field of front-end, we still do not have an industrial-level framework. Most DL/ML frameworks are still serving people who have related knowledge. We want to deliver a framework that could be widely used by the JavaScript (Node.js and browser) world without worrying about the complex theory behind the models. We believe this is the right way of passing the value of AI to the world.
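To make the BOA bridge described above more concrete, here is a minimal sketch of what calling Python from Node.js looks like; it assumes @pipcook/boa and Python's NumPy are installed locally, and the exact details may differ between versions:

```js
// Import a Python module directly from JavaScript via BOA.
const boa = require('@pipcook/boa');

const np = boa.import('numpy'); // Python's numpy, proxied into Node.js

// Named Python arguments are passed through boa.kwargs().
const matrix = np.array(
  [
    [1, 2],
    [3, 4],
  ],
  boa.kwargs({ dtype: np.int32 })
);

console.log(matrix.shape.toString()); // prints the Python tuple: (2, 2)
console.log(np.sum(matrix, boa.kwargs({ axis: 0 })).toString());
```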
What next?# We just formally released Pipcook a couple of months ago. This very first public release offers users an out-of-the-box experience of training a model for image classification, style transfer, and text analysis without much prior knowledge. Therefore, for the current stage, we are working on the user experience and developer experience. We are trying to optimize the training efficiency and flatten the learning curve of developing a plugin. Apart from that, we are building an all-in-one toolkit, which includes viewing training logs, inspecting and visualizing model structure, and model pruning and compression. What does the future look like for Pipcook and web development in general? Can you see any particular trends?# In the future, ML/DL will bind more strongly with the web in general. Distributed training, federated learning, and on-device inference will flourish on the web since all of them match the essence of the internet. To quote Tim Berners-Lee: "Let's redistribute power to individuals!" We are trying to build a paradigm in which ML/DL can serve, and be open to, everyone with web development skills, backed by the JavaScript and Python ecosystems. What advice would you give to programmers getting into web development?# The web is essentially a community based on the idea of sharing, connecting, and open source. Thus, try to connect, join, and work with the open-source communities you are using or find interesting! Who should I interview next?# You could interview developers who have made their very first open-source contributions. Any last remarks?# Thanks for the interview and this chance to share my journey along with Pipcook. Conclusion# Thanks, Wenhe! I find it admirable what you are doing with Pipcook. I believe it can work as a bridge for JavaScript developers to the world of machine learning without having to delve deep into Python-based solutions. To learn more, see the Pipcook site and Pipcook on GitHub.

NoCode programming - Doing more with less code - Interview with Alex Moldovan

What if you could create programs without coding? If you've ever used something like Microsoft Excel or Google Sheets, you've already done this at one level. NoCode programming is an emerging topic. To learn more about it, I am interviewing Alex Moldovan. Can you tell a bit about yourself?# Alex Moldovan My name is Alex, and I'm a software engineer based in Cluj, Romania. I work at a startup called teleportHQ, and I'm one of the founders of JSHeroes, one of the biggest JS communities in Europe. I've been in tech for more than ten years now, and in the past four years, I dedicated a lot of time to organizing events and public speaking alongside my full-time job. Lately, I started advocating for performance, accessibility, and ethical design. At teleportHQ, I'm fortunate enough to practice what I preach, as we're working on a product that can bring the NoCode revolution one step closer. How would you describe NoCode development to someone who has never heard of it?# There's no clear definition for NoCode development. It is a generic term for any platform or tool that abstracts away parts of the code you would typically write when building an app. Tools like Wix and Webflow are NoCode tools for frontend development, but modern serverless solutions or backend-as-a-service platforms are also NoCode, just for backend development. How does NoCode programming work?# One thing which is crucial and often misunderstood is that NoCode is still development. It replaces the need to write code instructions with visual interfaces and configuration dashboards. Your final application still runs on the same platform and pretty much uses the same programming languages at the end of the day. But the process of building that application can be simplified a lot. Additionally, a NoCode platform might have different ejection mechanisms, i.e., points after which you are on your own, and you continue developing on top of the skeleton it offers. The approach is often called LowCode development. In this case, you will have code generators that create the initial project and the UI/business/backend code that you can later modify or extend. I see NoCode as the next layer of abstraction on top of existing frameworks and programming languages. You can think of it as the step we took from Assembly to C or from pure HTML/CSS/JS to modern frameworks. With each step, it arguably became more comfortable to develop a more complex project, reusing the solutions others came up with for solving recurring problems (e.g., for the web platform: DOM manipulation, reactivity, data flows, etc.). Why did you join teleportHQ?# I've been with the team for the past two years. During this time, I have worked both on the code generators and the playground. I feel privileged to have been in the open source world all this time while fine-tuning the user experience on the playground. Both areas align with my personal goals and interests. One thing which made me interested in the company was the desire to open-source the code generators and the OpenUIDL (our format for representing UIs in a JSON structure). Creating in teleportHQ What is teleportHQ?# We are building a NoCode/LowCode platform (we call it the playground) for professional use cases. We target developers, designers, and content creators who want to streamline their UI creation workflows.
Our "secret" sauce is a set of open source code generators with which you can export your work from our platform at any time and continue on your own in any of the most popular frontend frameworks today (e.g., React, Vue, Angular, etc.). Theming in teleportHQ How does teleportHQ differ from other solutions?# There are plenty of tools and solutions for building static-ish websites, so we are working more towards application development and integration with existing workflows. Our long-term vision is to bend technology towards humans, and we all strive to create a platform that makes it hard or impossible for content creators to offer bad experiences to their end-users. We want to bake in all the knowledge that the dev community has so that future users can rely on a solid bedrock when building applications, without even knowing it. Editing in teleportHQ What next?# We're still in beta at this point at play.teleporthq.io, but we have a massive roadmap ahead of us. Our main priority now is to bring all the features needed for our users to start working professionally and to integrate the many experiments from our R&D team. We would also love to engage with the community on our open source code generators and to get as much feedback on the tool as possible in the coming months. What does the future look like for NoCode and web development in general? Can you see any particular trends?# There's a growing need for NoCode solutions, and it will only increase in the future because the demand for software in general is growing exponentially. I'm hoping that with the rise of NoCode tools for UIs, the entry barrier will become lower, and the web platform more accessible to people. There are many things to consider when building a website from scratch, so I would love to see concerns like performance and accessibility offloaded to these tools. That way, developers can focus on the functionality and business side of things. What advice would you give to programmers getting into web development?# I think they should embrace NoCode tools when the time comes, without fearing that the tools and automation will replace the need for their skills. Knowing the basics very well will allow you to switch from platform to platform or from framework to framework without paying the cost of transition and re-learning everything from scratch. Who should I interview next?# Jeremias Menichelli. Conclusion# Thanks for the interview, Alex! I find the idea of LowCode/NoCode highly interesting, and I like your approach of using an intermediate UI definition to tackle the differences between platforms. It reminds me of programming languages which use an intermediate format during their compilation (a small sketch of the idea follows below). You can find teleportHQ online. See also their GitHub and try their playground.
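To illustrate the idea of an intermediate UI definition, here is a hypothetical, heavily simplified JSON-style sketch in the spirit of OpenUIDL; the field names are illustrative rather than the exact schema, but they show how a single declarative document can feed code generators for different frameworks:

```js
// An intermediate UI definition a code generator could translate
// into React, Vue, or Angular components. Illustrative shape only.
const uidl = {
  name: 'Greeting',
  propDefinitions: {
    title: { type: 'string', defaultValue: 'Hello' },
  },
  node: {
    type: 'element',
    content: {
      elementType: 'h1',
      children: [
        { type: 'dynamic', content: { referenceType: 'prop', id: 'title' } },
      ],
    },
  },
};
```

A generator walks this tree and emits framework-specific component code, which is what makes the "export at any time and continue on your own" workflow possible.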

ExtendsClass - Online Tools for Developers - Interview with Cyril Bois

One of the great things about being a developer is that you can literally create your own tools to make it easier to do your work. Cyril Bois has been developing a set of tools at ExtendsClass for a while, and now we have a good chance to learn more. Can you tell a bit about yourself?# Cyril Bois My name is Cyril Bois, a French developer passionate about coding. I have been working for an independent software vendor specializing in retail for 14 years (I am faithful!). During my free time (I have a wife and three boys), I work on my platform ExtendsClass. How would you describe ExtendsClass to someone who has never heard of it?# ExtendsClass is an online toolbox for frontend and backend developers. You can find quite a wide variety of tools:

- Playgrounds
- Code checkers
- Random data generators
- Comparators
- Converters
- Formatters

It's a bit of a Swiss Army knife for developers! The service can save time for doing quick tests without having to install any software on your computer. How does ExtendsClass work?# Using ExtendsClass is simple. Just use a browser! For some tools, HTTP APIs are available, and you can make HTTP calls against them. In specific cases, you have to create an account to obtain an API key to avoid abuse. How does ExtendsClass differ from other solutions?# The main difference is the variety of tools available. When people ask me, I make the source code of specific tools available on my GitHub account. Why did you develop ExtendsClass?# I developed ExtendsClass to pass the time and mainly because, in my work, I started coding less since I became a solution architect, and I missed coding! What next?# First, I still have plenty of features to add to existing tools. Secondly, given some tools are only available via a browser, I am looking to make them accessible via HTTP APIs. Finally, the platform is beginning to accrue some amount of technical debt I'll have to address. I won't have time to be bored! What does the future look like for ExtendsClass and web development in general? Can you see any particular trends?# It's a good question. :) A few years ago, I thought Node would be a tidal wave; I was wrong! Maybe Deno will succeed where Node failed. I am keeping an eye on Python as it's healthy as a platform, but I would not venture to make predictions. What advice would you give to programmers getting into web development?# My advice is to be curious! Nowadays, we find excellent tutorials and video conferences freely accessible on the Internet, and we can learn a lot of technologies quite quickly. We have no excuse! Who should I interview next?# Perhaps Ryan Dahl so he can tell you about Deno! Any last remarks?# No! Thanks for the interview. :) Conclusion# Thanks for developing the platform, Cyril! I can see there are many tools on the service I'll find useful in my work, and I hope others find them useful as well. See the ExtendsClass site to learn more.
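As a postscript on the HTTP APIs mentioned above, calling such a tool programmatically tends to follow a familiar pattern; the endpoint and header below are hypothetical placeholders rather than ExtendsClass's actual API, so check the site's documentation for the real details:

```js
// A generic sketch of calling a developer-tool HTTP API with an API key.
// The URL and header name are hypothetical.
const response = await fetch('https://api.example.com/json-validator', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.API_KEY, // key obtained by creating an account
  },
  body: JSON.stringify({ data: '{"valid": true}' }),
});
console.log(await response.json());
```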

"SurviveJS - Webpack" book updated to webpack 5

Webpack 5 is getting near, and it felt like a good time to update the book. Although it's still in beta, it seems quite stable already, and it's worth experimenting with. During this work, I went through my long list of TODOs that had accrued since the previous release of the book. The current version of the book supports both webpack 4 and 5, and I've noted the new features where possible. The best source for all the changes is the official changelog. Although the release has a couple of major features, including better caching and module federation, in large part it's a clean-up release that should make subsequent releases easier. I've rewritten and reworked a big part of the book and modernized it where necessary. The length of the book is roughly the same, and I've tightened the content where I felt it was necessary while dropping references that are now obsolete. If you aren't interested in what has changed, skip straight to the book. webpack-merge - Updated to support TypeScript out of the box# Thanks to Google sponsorship, I was able to go through webpack-merge issues and add first-class support for TypeScript to it. I ended up dropping smart merging from the package as it became impossible to handle all the special cases related to it. Instead, there are now utilities that give you control over the details (see the sketch after the list below). One last typing-related issue remains, and it has to do with the way webpack and webpack-dev-server typings play together. Overview of the situation# Since the previous release, I've been working on occasional client projects and putting a lot of energy behind the React Finland conference as its director. Sadly, the ongoing pandemic shattered our plans for the conference and forced us to postpone the physical portion to next year. In the meantime, we're running online mini-conferences to provide service for our audience. So far we've organized two, and there are many on the way. The delay in the webpack 5 release has been problematic, as the lack of a stable release means I cannot make a paper release of the book either. I feel it was a good time to update the book, though, as when it goes stable, the paper version can receive a long-pending update as well. While waiting for the stable version, I hope to integrate changes based on your feedback to improve the book further. Client work especially has generated valuable insights which I've integrated into the book. These days people often use a solution such as Create React App and simply skip configuring webpack, but at times custom configuration is needed. Even with CRA, you may find yourself extending the setup in unofficial ways, as the defaults won't fit each use case. My client work tends to center around getting webpack configuration under control while improving the performance of both the build and the end result. Book Improvements - 2.6# I've released multiple silent releases since the previous public version to improve the book. The past month, during which I've been renovating the book, has been intense, and I feel it's in much better shape than before. I still use it as my personal main reference for webpack-related things, and I've tried to develop the book so that you have strong secondary material with more information available should you want to dig deeper. For me, that's the sweet spot for the book. The book has received numerous changes, and it's not possible to list them all here.
Instead, I've compiled a list of the most important ones:

- Most code examples have been formatted with Prettier, simplifying my workflow.
- I've given the book a heavy grammar and formatting pass, so it should look and feel better than before.
- Instead of webpack-dev-server (WDS), the book uses webpack-plugin-serve. It's an option that has been built on top of webpack's watch mode, although it can run as a server of its own (needed for multi-compiler mode). In practice, I've found it to work better than WDS with complex proxy setups while being compatible with webpack 5 out of the box. As a result, the corresponding chapter has been renamed to Development Server.
- Instead of webpack-cli, the book uses webpack-nano. Again, I went with a light alternative that's enough for the book and provides compatibility with webpack 5.
- Instead of html-webpack-plugin, the book uses mini-html-webpack-plugin, which I developed with Tobias Koppers and Artem Sapegin. Feature-wise it's much simpler, but that's exactly why I've settled on it in my personal usage. That said, for anything with complex requirements, html-webpack-plugin may be the better option, and it's easy to refactor the book's examples to use it instead.
- As order-wise it makes more sense to explain code splitting before bundle splitting, I swapped the order of the chapters.
- Given mini-css-extract-plugin has become stable, the Separating CSS chapter uses it instead of extract-text-webpack-plugin. The change was made earlier during a silent release, but it's still worth mentioning. Webpack 5 won't introduce first-class support for CSS out of the box, so the plugin will still be needed.
- I've rewritten the Eliminating Unused CSS chapter to use Tailwind CSS and PurgeCSS. Ideally, purging shouldn't be needed at all, and solutions like Stitches achieve this. It's still a good technique to discuss as it has relevance for legacy projects.
- The Source Maps chapter has simpler comparisons now to save some space. I found condensing the examples didn't lose anything valuable while making space for new content.
- The Tree Shaking chapter has more information on how to get it to work with webpack, especially with webpack 4, as there are certain prerequisites.
- The Separating Manifest chapter has been renamed to Separating a Runtime to use more accurate naming.
- I've rewritten the Build Analysis chapter to contain the tools and services that have emerged.
- The Performance chapter has more specific advice on how to measure and improve webpack's performance. I also included webpack 4 specific tips.
- There's a new chapter about Module Federation and micro frontends. I designed the chapter so that you can complete the setup without having to go through the entire book. The target of the chapter is to get you started and to a point where you can learn more about it through the examples available online.
- I've rewritten the Internationalization chapter to use embed-i18n-webpack-plugin over the now deprecated i18n-webpack-plugin.
- The Testing chapter has been greatly simplified. It still has basic content, but I feel testing is often better handled outside of webpack using specific tools designed for the purpose.
- The Searching with React appendix has been rewritten to use modern React.
- The Comparison of Build Tools appendix includes the current alternatives to webpack that have emerged.
- The Glossary has been expanded and made more accurate.
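Returning to webpack-merge for a moment, here is a minimal sketch of the explicit merge utilities that replaced smart merging; it assumes the v5 API and is meant as an illustration rather than a complete configuration:

```js
// Explicit control over merging behavior, field by field.
const {
  merge,
  mergeWithCustomize,
  customizeArray,
  customizeObject,
} = require('webpack-merge');

const base = { entry: ['./src/index.js'], resolve: { extensions: ['.js'] } };
const production = { entry: ['./src/analytics.js'], mode: 'production' };

// Plain merge appends arrays and merges objects.
const simple = merge(base, production);

// The customize variant lets you pick a strategy per field.
const custom = mergeWithCustomize({
  customizeArray: customizeArray({ 'entry.*': 'prepend' }), // prepend entry items
  customizeObject: customizeObject({ resolve: 'replace' }), // replace resolve wholesale
})(base, production);

console.log(simple, custom);
```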
You can find the book below:

- “SurviveJS — Webpack” - Free online edition
- “SurviveJS — Webpack” - Leanpub edition (digital)

A part of the income (around 30%) goes to Tobias Koppers, the author of webpack. I support his work this way, given mine builds on top of his. What next?# My hope is that webpack 5 reaches a stable state soon. While waiting for it, I hope to receive feedback on the content so I can improve it before the next paper release. Given this book is now in good shape, I can focus on updating the remaining two books and working on my site infrastructure project. I am currently reworking the technical stack of this site while learning a lot about emerging technologies. Conclusion# I hope you enjoy the first webpack 5 version of the book! Note that I'm active on the book's Gitter channel if you want to bug me about webpack. You can also ask questions at my AmA.

Midway - A Node.js framework for Serverless - Interview with Harry Chen

Serverless computing is one of those approaches that has taken the world by storm. The idea is to make computing a flexible resource you consume on-demand. Compared to earlier models, it scales in terms of demand instead of requiring an upfront investment in server infrastructure. Harry Chen has developed a solution, Midway, that makes it easier to develop serverless applications using Node.js. In this interview, we'll learn more about the approach and how it's affecting the technology landscape. Can you tell a bit about yourself?# Harry Chen Hi, I'm Harry Chen, a Staff Front-end Engineer at Alibaba. I've worked on the Node.js technology stack for a long time. During this time, I have provided framework and middleware solutions for Taobao and other Alibaba business units. I have been responsible for the Serverless Arch standardization specification of Alibaba Group and the overall Node.js system infrastructure for Taobao. During this time, I've solved various maintenance and stability issues for full-stack development. I am also responsible for Midway on-premise and open-source development. The work includes the development and maintenance of community open source products, such as Midway, Sandbox, Pandora.js, Injection, and many others. How would you describe Midway to someone who has never heard of it?# Midway is a framework that allows applications written in a pure-function pattern to be deployed to various cloud platforms without any code modifications. The idea is to avoid lock-in to a single Function as a Service (FaaS) vendor. Midway Serverless alleviates the pain of migrating traditionally deployed applications to elastic serverless platforms. At Alibaba Group, many legacy Node.js applications are still working online and require heavy operational maintenance. All of this can be costly. Midway Serverless is the solution we adopted to accelerate the migration and reduce the costs. There is no action to be taken to deploy an app to a FaaS platform other than composing a single YAML configuration file with Midway Serverless. As React Hooks gain popularity rapidly, coding with functions is becoming more popular. Midway was previously built on decorators and dependency injection to provide inversion of control, augmenting JavaScript classes to be the basic grouping units of code snippets. Writing in a function pattern doesn't mean it is not possible to achieve inversion of control. Midway Serverless apps can share the same coding pattern between the web and the server side. How does Midway work?# Midway provides a set of runtime adaptation tools that can smooth out the differences between cloud vendors in the community. These tools encapsulate and standardize the different cloud vendors' access parameters, help migrate different types of Node.js products (applications and functions) to the cloud vendor, and also provide their own lifecycle for extensions. All of this makes on-premise deployments easy. On the other hand, Midway itself is a framework that makes code decoupling efficient through TypeScript + IoC capabilities.
How does Midway differ from other solutions?# Usually, common FaaS handlers look like this:

```js
// for events
exports.handler = (event, context, callback) => {
  callback(null, "hello world");
};

// for HTTP
exports.handler = (request, response, context) => {
  response.send("hello world");
};
```

Let's check out the Midway Serverless solution:

```js
// Midway IoC decorator to declare this class to be provided
@Provide()
export class MyFirstFunctionClass {
  @Inject()
  ctx;

  // first function, for events
  @Func("api.user")
  async myFn1() {
    return "hello world";
  }

  // second function, for HTTP
  @Func("api.book")
  async myFn2() {
    this.ctx.type = "html";
    this.ctx.body = "<html><body>hello world</body></html>";
  }

  // third function
  @Func("api.store")
  async myFn3() {
    const data = await request("http://xxxx/api/data.json");
    this.ctx.set("X-HEADER-TIMEOUT", 2000);
    this.ctx.body = {
      data: {
        success: true,
        result: data,
      },
    };
  }
}
```

It's obvious that the first option seems easier to get started with quickly while being clearer. On the other hand, we can almost entirely reuse the Midway Web Framework's decorators and even port IoC-formed code arbitrarily between Midway Web and Midway Serverless. Furthermore, Midway provides a runtime isolation architecture that is unique in the community. It not only allows functions to run on top of the architecture, keeping the code isolated, but also allows the original application to be migrated quickly while maintaining a relatively elegant state. There may be some frameworks with solutions like Midway's. However, we provide the ability to convert between functions and applications, in addition to the traditional decorators for different scenarios, so that the application can decide whether to deploy as functions or as applications at build time. Doing this allows developers to focus on the business itself without worrying about the platform they are deploying to in the first place. Why did you develop Midway?# In the past, we used a traditional function architecture to support our logic. After we had been using it for a while, we realized that the cloud vendor itself didn't provide a good package. The required functions had to be combined or even rewritten, and the community didn't have a web framework specifically for serverless scenarios, which made the development of our business slow. The experience made us think about the need to solve the problem of migration between different platforms, which led to Midway's first goal for the serverless system: to prevent vendor lock-in. After designing a set of serverless lifecycles and implementing some function runtimes, we realized that the community had the same issue. While the Serverless Framework did some things, it didn't smooth out the differences between platforms at the code level. As a result, we decided to open source Midway Serverless and make this capability available to the community. What next?# We're working on the second major version of Midway, which will provide a combination of full-stack applications, functions, and front-end code to make the whole development experience better. At the API level, we'll be opening up more scenario decorators, such as @Socket, as well as some logic-processing decorators, such as @Pipeline. From a functional perspective, Midway will evolve into an ecosystem that developers can use out of the box, similar to Spring Boot. What does the future look like for Midway and web development in general?
Can you see any particular trends?# Whether it's the current full-stack, Serverless Arch, edge computing, AI, 3D, etc., web developers will use Node.js in many areas, and Midway will also provide capabilities in different scenarios, which will facilitate the evolution of the Node.js ecosystem and web development. What advice would you give to programmers getting into web development?# Web development is a creative position, and we should explore more than web technology itself, like Serverless Arch, WebAssembly, etc., to look at the big picture and become full application engineers. Who should I interview next?# Eric Li, contributor to Pipcook. Conclusion# Thanks for the interview, Harry! I can see why you developed Midway, and I hope other developers find the solution as well. Serverless computing is an emerging space, and it looks like Midway could become a vital part of it. You can find Midway on GitHub and follow Midway news on Twitter.

Enroute - Envoy Route Controller - Interview with Chintan Thakker

When developing APIs, you often get questions like how to combine and control data coming from different sources. That's where proxy-based approaches come in. To understand one such solution in better detail, I am interviewing Chintan Thakker, who developed a solution called Enroute. Can you tell a bit about yourself?# Chintan Thakker Hi there! My name is Chintan Thakker, and I am the founder at Saaras Inc. Our primary product is the Enroute Universal Gateway, an automation-first API gateway built on Envoy Proxy. Both are open source solutions. How would you describe Enroute to someone who has never heard of it?# Enroute makes it easy to run Envoy as an API gateway. You can use it for microservices running inside Kubernetes or any service running standalone when there is no Kubernetes. What makes it easy are simple REST APIs to configure the standalone gateway, or CRDs to configure the Kubernetes ingress gateway. Plugins provide the ability to add fine-grained route-level or global policies and traffic control. Kubernetes is an open source solution for scaling containerized applications. How does Enroute work?# Enroute works by abstracting away the complexities of the underlying proxy and providing a simple API, both for Kubernetes and standalone use-cases. Enroute makes it flexible to add traffic policies at the global or route level using a plugin (or filter) architecture. Add a plugin and control all the traffic on the fly, without any restarts or reboots. Simple API calls are all that is needed to get fine-grained control over the traffic. How does Enroute differ from other solutions?# Enroute is an API gateway with batteries included. Since it runs on Envoy, there is a lot of functionality included in the solution. Other solutions typically charge for it or have to do extensive development for it. Enroute is also 100% automatable, and there are APIs for everything. Enroute's state management is also flexible, as you can store state in a database or run stateless without any database. When running stateless, even a file from GitHub can be used to completely restore the gateway state. For Kubernetes, the state is stored in CRDs, and state management is completely Kubernetes-native without any external databases. Enroute is the only gateway on Envoy proxy that works for both Kubernetes ingress and standalone use-cases. Typically, solutions target one or the other. A majority of users have a mix of workloads, and this capability comes in handy, especially with the same consistent policy model across all deployments. And running Envoy makes it a super performant solution. Why did you develop Enroute?# We saw a clear gap in the automation capabilities of the existing gateways. We took an API-first, 100% automation approach when building Enroute. We also see a lot of organizations wanting to adopt Envoy for their use-cases. Now, you might wonder: why Envoy? Envoy is highly performant since it is written in C++. Some were running service meshes that include Envoy. Running Envoy outside provided the same consistent experience with deep visibility, superlative performance, and end-to-end tracing. We realized that an API gateway like Enroute can accelerate time-to-value for these organizations while solving API gateway use-cases for all their workloads. Adoption of Kubernetes is on the rise. With a solution like Enroute, Kubernetes adoption is really simplified, since the same policy for the standalone gateway can be easily transferred.
What next?# We just released complete OpenAPI automation capabilities, and this has a strong security benefit as it eliminates shadow API blind spots. Enroute can now ingest an OpenAPI spec and help flag shadow APIs. We see an opportunity in security and compliance. That is another big focus for us, and we are actively building features for it. What does the future look like for Enroute and web development in general? Can you see any particular trends?# Cloud adoption is accelerating, and there is a need for cloud-agnostic, multi-cloud solutions that just work. The ability to integrate with different clouds is a necessity. Enroute, with its API-first approach, is well-positioned for that. We also see a clear trend in developer-driven, self-serve use cases where a small team is responsible for driving operations for a multitude of services. These teams rely on automation. Developer tools that enable these trends will be key to attaining this agility. Again, we are excited that Enroute was built with these goals in mind. What advice would you give to programmers getting into web development?# Web developers are the folks that try out the latest tech and set the trend for what comes next. Stay current, and have fun doing it! Who should I interview next?# A developer who just went from zero to production on their first project. Any last remarks?# Thank you for this opportunity. Conclusion# Thanks for the interview, Chintan! Enroute sounds like an excellent solution for anyone dealing with APIs and orchestrating their usage. Backend development has changed a lot during the past decades, and I agree with your assessment that we'll get even more done with less in the future as this type of work becomes automated. You can find Enroute on GitHub and get news about it on Twitter.

Detox - Testing React Native - Interview with Mykola Solopii

Testing mobile applications is a tough topic, as you have to worry about different devices, and the interaction model is challenging. Detox is a solution built specifically for React Native, and in this interview, we'll learn more about the approach from a QA automation engineer, Mykola Solopii. Can you tell a bit about yourself?# Mykola Solopii I am a senior test automation engineer at Career Karma with plenty of experience testing different types of applications. How would you describe Detox to someone who has never heard of it?# Detox is an open-source end-to-end (e2e) automation library for mobile apps, for both iOS and Android. How does Detox work?# In our case, Detox runs the Career Karma mobile app and manipulates it like a real user (tap, type text, scroll, etc.). When Detox runs your tests, two processes start in parallel:

- It runs your app (on a simulator or a real device)
- It runs your test suite in the Node environment and allows sending native commands to your application over the WebSocket protocol

With such an approach, you can run your tests smoothly, and Detox will build your application and interact with it like a real user. Emanuel Surino's article End-to-end testing in React Native with Detox shows what a Detox session looks like in practice. What do Detox tests look like?# To give you a better idea of what testing with Detox looks like, consider a test from our codebase:

```js
import { OnboardingFlowScreens } from "../screens/OnboardingFlowScreens";
import { CommunityScreen } from "../screens/CommunityScreen";
import { SignInScreen } from "../screens/SignInScreen";

const onboardingScreen = new OnboardingFlowScreens();
const signInScreen = new SignInScreen();
const communityScreen = new CommunityScreen();

describe("Sign in flow", () => {
  beforeAll(async () => {
    await device.launchApp({
      // Grant permissions in advance, because it's impossible
      // to dismiss permission modals in runtime
      permissions: {
        notifications: "YES",
        photos: "YES",
        camera: "YES",
        medialibrary: "YES",
      },
    });
  });

  it("User should be able to Sign in", async () => {
    // detoxHelper, email, and password are project-specific helpers
    // and fixtures available in their test setup
    await detoxHelper.waitForElementToBeVisible(
      onboardingScreen.signInButton
    );
    await onboardingScreen.signInButton.tap();
    await waitFor(signInScreen.emailField)
      .toBeVisible()
      .withTimeout(5000);
    await signInScreen.emailField.replaceText(email);
    await waitFor(signInScreen.passwordField)
      .toBeVisible()
      .withTimeout(5000);
    await signInScreen.passwordField.typeText(password);
    await signInScreen.submitButton.tap();
    await waitFor(communityScreen.communityHeaderText)
      .toBeVisible()
      .withTimeout(5000);
    await expect(communityScreen.communityHeaderText).toBeVisible();
  });
});
```

How does Detox differ from other solutions?# The crucial difference is that Detox is a grey-box automation tool. It means that Detox can automatically synchronize the test execution with your app. Detox understands when an asynchronous operation is completed within your app, and only then continues executing your test. You get the ability to run your tests faster, and they are more stable. Why do you use Detox?# It's a piece of cake to use Detox. Usually, mobile automation is intricate, and it's a brand new technology compared to the web. There are not many mobile automation tools that can manipulate mobile apps efficiently. Our application is built on React Native, and Detox knows how to deal with such technology better than its closest competitors (e.g. Appium) because it was designed specifically for it.
Detox is like Protractor in the mobile automation world, aimed specifically at React Native applications and much more. What does the future look like for Detox and web development in general? Can you see any particular trends?# In my subjective opinion, Detox's future depends on the popularity of React Native. If more and more people start using React Native for developing mobile apps, more QA automation engineers will be using Detox for e2e testing. It is an open source library supported by Wix, which should ensure Detox's growth in the future. If we are speaking about trends in web development, you never know, as web development is changing rapidly. Literally, 10-15 years ago, people could not imagine the opportunities we have now. So, it is tough to predict the future of web development. I hope it will make developers and users happier! What advice would you give to programmers getting into web development?# Find compelling motivation! Begin tinkering with websites and apps. If you have an eye for detail, you can even explore starting out as a manual QA tester. Conclusion# Thanks for the interview, Mykola! I've met Rotem Mizrachi Meidan, the creator of the tool, in person, and he presented Detox at the first React Finland. You can find Detox on GitHub.

Synthetics - Monitor availability and performance of your website and APIs - Interview with Siva Kaliappan

Anything you don't measure or test, you cannot improve. The wisdom applies particularly to the web as we develop our websites and applications. Without any kind of monitoring solution in place, you are flying blind. To learn more about the topic, I am interviewing Siva Kaliappan from Sematext. Can you tell a bit about yourself?# My name is Siva Kaliappan, and I am the Product Lead for Sematext Synthetics. I primarily work on the Sematext Monitoring product line across all parts of the stack, from the front-end to the backend and agents. Lately, I have mainly worked on the design and development of Sematext Synthetics. I am also an open-source developer and the author of the popular LogTrail Kibana plugin included in Sematext. How would you describe Synthetics to someone who has never heard of it?# Sematext Synthetics is like a trusted user who monitors your APIs and websites 24x7 from multiple locations around the globe and alerts you when things go wrong. Sematext Synthetics also provides detailed reports on the availability and performance of your web applications. How does Synthetics work?# At a high level, Synthetics works by periodically sending requests to your HTTP endpoints, or launching your website in an embedded browser, from multiple locations around the globe, and recording various performance metrics and errors, if any. Then we check if the actual results meet the expectations and persist the results for reporting. As a user, you start by creating an HTTP or browser monitor and specify the interval to run, the list of locations to run the monitor from, monitor-specific details, and a list of expected conditions for the monitor to pass. An HTTP monitor is used to monitor HTTP endpoints like APIs or web URLs. It sends a single HTTP request to the configured HTTP endpoints with the specified request details and records the response and performance timings. A browser monitor is used to monitor a web page or user journey. It uses a script based on the Puppeteer API to drive an embedded Google Chrome browser. Sematext Synthetics runs the script and records the various performance timings and errors, if any. It also captures one or more screenshots of the loaded page for visual inspection. How does Synthetics differ from other solutions?# One of the major differentiators for Sematext Synthetics is that, as part of Sematext Cloud, it integrates with infrastructure monitoring, log management, and real user monitoring. While you can use each of these solutions independently, you can reap the benefits of their tight integration. Each of them being just a click away makes debugging a lot faster and easier. And with the flexibility to create custom reports and have data from all of the above in a single dashboard, you can create a completely customized view of your application performance that suits your needs. Why did you develop Synthetics?# After releasing Experience, our real user monitoring solution, last year, we felt synthetic monitoring would be the right addition to Sematext Cloud. Now with Synthetics, our customers can get end-to-end visibility of their applications from a single place. Also, we needed availability monitoring for our own applications. We have been using Synthetics to monitor our applications and found it uncovering issues with our APIs on a few occasions.
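Going back to browser monitors for a moment: below is a rough sketch of the kind of Puppeteer-driven check such a monitor could run. The URL and selector are hypothetical, and the real Sematext scripts follow their own conventions:

```js
// A minimal synthetic browser check: load a page, wait for content,
// record the timing, and capture a screenshot for visual inspection.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'load' });
  await page.waitForSelector('#app'); // fail the check if the app never renders
  console.log(`Loaded in ${Date.now() - start} ms`);

  await page.screenshot({ path: 'check.png' });
  await browser.close();
})();
```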
What next?# We have lots of exciting features lined up for Sematext Synthetics. Consider a part of our roadmap below:

- Support for a browser-based recorder to quickly record Synthetics scripts for non-developers.
- The ability to monitor resources behind firewalls using private agents, for enterprise users.
- Public status pages to share your API status with your customers.

And of course, we always listen to our users and customers and will be adding features based on their input, too. What does the future look like for Synthetics and web development in general? Can you see any particular trends?# We see growing interest in the following areas concerning synthetic monitoring:

- The performance metrics measured in the synthetic environment are moving from browser-centric metrics like page load time, DOM interactive, etc. to user-centric metrics like First Contentful Paint (FCP), Time To Interactive (TTI), Speed Index, etc.
- Adding more intelligence around failures. When things fail, instead of just alerting the user that they failed, provide more information for the user to debug and identify the root cause quickly; in some cases, point them to possible root causes. We think Sematext is in a perfect position to do this since we can leverage the end-to-end visibility of application performance using the Sematext platform.
- Integration of synthetic monitoring into the development workflow to catch web performance issues much earlier in the application lifecycle. It would be made more accessible by providing the ability to run synthetic tests as part of the CI/CD pipeline.

As far as web development trends are concerned, we see the following areas getting more interest:

- A move away from classic multi-page websites in favor of single-page apps, even for informational/presentational websites, thanks to the emergence of static page generators for popular JS frameworks like Gatsby.js. Single-page applications are a de facto standard for any web development, again thanks to JavaScript frameworks.
- Web accessibility becoming a standard, due to recent regulations and inclusion in most JavaScript frameworks.

To summarize, web standards are driven by the major JavaScript frameworks, and we don't see this changing soon. There will always be attempts to push something new, but until the big players adopt it, it will not be a standard. What advice would you give to programmers getting into web development?# As computers and mobile devices become faster, websites are becoming slower. It should be the other way around. The average page size keeps increasing. As someone new to front-end development, I would advise you to think about web performance from the beginning. A website that loads quickly and consumes fewer resources results in happier users and a lower carbon footprint. Let us all help build a more sustainable web! Who should I interview next?# Otis Gospodnetic, Sematext founder. Any last remarks?# We are excited about the release of Sematext Synthetics and its future roadmap. Thanks for having me, and excellent work with SurviveJS! Conclusion# Thanks for the interview, Siva! If you haven't already, adding a monitoring solution to your toolkit is a valuable move for a web developer, as it will allow you to put effort where it matters and understand when things go wrong in ways you might not expect. Sematext offers a free 30-day trial. Give it a go and send Siva your feedback!

JavaScript Security - Interview with David Balaban

Given JavaScript dominates the world of web development, it has become a good target for malicious actors. Furthermore, its flexibility has led to a plethora of attack vectors for cybercriminals, making JavaScript security an important topic for web developers. In this interview, I am talking with David Balaban, a cybersecurity specialist. To learn more about the topic, read the interview with Liran Tal. Can you tell a bit about yourself?# David Balaban My name is David Balaban. I work with the MacSecurity.net and Privacy-PC.com outlets, specializing in a vast range of cybersecurity issues, from Mac and Windows malware analysis and ransomware mitigation to software evaluation and the fundamentals of secure web development. The latter has been my particular focus for quite some time, with insights into JavaScript security helping me expand my professional horizons. How would you describe JavaScript security to someone who has never heard of it?# JavaScript is a lightweight scripting language best known for its use in dynamic web page environments. Its purpose is to specify how a web page responds to specific events initiated by the user. A few examples include form submission, animations, and events triggered when you press a button. To deliver this interactivity, JavaScript is integrated into an HTML page, where it operates in tandem with what is called the Document Object Model (DOM), the underlying tree structure interface of that page. Whereas this tight interplay is the basis for ensuring a seamless user experience, it is also the weakest link in JavaScript implementation because it can allow attackers to deploy harmful scripts over the web and execute them on users' computers. In other words, the most significant security challenge with JavaScript is that threat actors can mishandle it to inject rogue scripts triggering unwanted website behavior. Combined with classic site hacking, this activity can become a launchpad for the massive distribution of malicious code in a highly covert way. The security side of the matter is all about patching loopholes in JavaScript deployment to prevent perpetrators from tweaking the code for nefarious purposes. What is JavaScript security?# There are three significant reasons why writing secure JS code is easier said than done. All of them stem from the very design and capabilities of JavaScript. The dynamism of the language# JavaScript is dynamic, asynchronous, and loosely typed. While being on the plus side of JS, these hallmarks also make it reasonably easy to exploit. For instance, the use of the sketchy ‘eval’ function and the injection of third-party code via ‘script src’ may enable an attacker to execute strings at runtime. As a result, you cannot “statically guarantee” that your code will work in a specific way. Asynchronous callback functions, invoked through the likes of the setTimeout method and the XMLHttpRequest object, statistically conceal the bulk of treacherous JS errors. Extensive capabilities# JavaScript has assumed quite a few unique characteristics in the course of its evolution. These include prototypes, first-class functions, and closures. They further bolster the dynamic nature of JS, but, you guessed it, they may also hamper the process of writing secure code. Close ties between JavaScript and DOM# As I previously mentioned, this binding is the foundation of a smooth user experience. JavaScript modifies the DOM dynamically so that the browser instantly reflects these changes. However, the fragments of code used for this interoperability are highly susceptible to errors and third-party interference.
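To make the DOM risk concrete, here is a small illustrative sketch of a DOM-based injection flaw and a safer alternative; the element ID and query parameter are hypothetical:

```js
// Untrusted input flowing into a DOM sink. Illustrative only.
const params = new URLSearchParams(location.search);
const name = params.get('name');

// Vulnerable: a payload such as <img src=x onerror=...> in ?name=
// gets parsed as markup and its handler executed.
document.querySelector('#greeting').innerHTML = 'Hello, ' + name;

// Safer: treat the input strictly as text, never as markup.
document.querySelector('#greeting').textContent = 'Hello, ' + name;
```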
Mainstream attack vectors# The mainstream attack vectors piggybacking on the weaknesses of JavaScript include:

- Cross-site scripting - XSS
- Cross-site request forgery - CSRF
- Server-side JavaScript injection - SSJI

The trio can fuel malware propagation, identity theft, and account takeover. Security is a matter of minimizing or eliminating the above risks. All it takes is writing flawless code that cannot be exploited by attackers. JS code analysis using specially crafted tools is what bridges the gap between web development and immaculate JavaScript implementation. What are the distinctive features of JavaScript security solutions?# All instruments that assess JS code for errors and vulnerabilities fall under one of the following categories:

- JavaScript code testing frameworks. These tools automatically check the JS code for common syntax errors. From where I stand, the most important ones are QUnit, Jasmine, Mocha, and jsTestDriver. There are also APIs such as Selenium and PhantomJS that emulate browser behavior when specific code is executed.
- Static analysis tools. Their purpose is to inspect the code for compliance with web development best practices. They help tidy up your code by pinpointing redundant strings and scrutinizing dependencies between JS functions, Cascading Style Sheets (CSS), HTML tags, and images. My personal favorites include WARI, JSLint, Google Closure Compiler, and WebScent.
- Solutions for dynamic JS code analysis. These traverse your code for anti-patterns and help you better understand the ties between components and the events they trigger. I prefer utilities called DOMPletion, JSNose, and Clematis.

Why did you start working on JavaScript security?# In my experience, cybercrime is a complex fusion of different attack mechanisms, and JavaScript abuse is one of them. Malicious actors are increasingly embedding dubious JS code into compromised web pages to deposit malware onto visitors’ machines silently and orchestrate large-scale online scams. That is what encouraged me to dive into this particular security domain. What next?# I am going to ramp up efforts to explore the ways cybercriminals repurpose JavaScript code to spread predatory software in general, and the increasingly prolific Mac adware in particular. What does the future look like for JavaScript security? Can you see any particular trends?# JavaScript use cases are continuously expanding beyond the original realm. It is increasingly used for developing applications, both mobile and desktop ones. Environments such as Node.js facilitate JavaScript deployment on web servers. More implementations mean there are potentially more exploitation scenarios that need a proper response security-wise. Conclusion# Thanks for the interview, David! Although JavaScript lets developers create complex web applications with relative ease, it also comes with a cost in terms of security. Being able to create secure JavaScript code is a critical prerequisite for making the internet a safer place. To learn more about the topic, I recommend checking out the OWASP Top Ten, as it lists the main threats and explains them in great detail.

MonoLisa - Font follows function - Interview with Marcus Sterz

As a developer, a font is something you use every day at work. One year ago, I teamed up with Andrey Okonetchnikov and Marcus Sterz to create a new typeface for programming. In this interview, you'll learn more about MonoLisa from Marcus' perspective. Can you tell a bit about yourself?# Marcus Sterz I am a type designer from Vienna, Austria. Coming from a background as a graphic designer, I started developing typefaces in 2008. My foundry name is FaceType; we create retail and custom typefaces. How would you describe MonoLisa to someone who has never heard of it?# Developers need well-crafted monospaced fonts to read and write code. MonoLisa lets you do that as easily and quickly as possible. It did not derive from a premise of what it should look like design-wise but rather from what its purpose is: font follows function. What makes MonoLisa special?# The main goal was to create a typeface that's easy and fast to read and that prevents you from mixing up letters (which leads to errors). A second focus was on keeping the limited space as well balanced as possible – in monospaced typefaces, each letter or symbol is given the same amount of space, which makes it hard to keep the proportions eye-pleasing. Ideally, a font does not get noticed when you read the text; it's just there to transfer information because it's about content, not letter shapes. How does MonoLisa differ from other fonts?# I'm afraid the question is not specific enough. MonoLisa differs from most typefaces because most of them are proportional. MonoLisa differs from most monospaced typefaces because most of them are not well designed. Why did you develop MonoLisa?# The idea to create MonoLisa came after an exciting conversation with Juho Vepsäläinen and Andrey Okonetchnikov about fonts for developers. Although developers look at type all day, most of them seem to use suboptimal fonts for their work. I wanted to give them an excellent option for everyday use. What next?# Illustrations! What does the future look like for MonoLisa? Can you see any particular trends in font design?# The main work is done, but I intend to expand the languages covered to Cyrillic and Greek. Also, many more code-related symbols will get implemented. Lately, I have not observed any specific new trends. What advice would you give to people getting into font design? What should they learn?# Just start, and don't expect yourself to come up with a masterpiece. Make mistakes, learn from them, make better mistakes. Who should I interview next?# Try Matthew Carter, a type designer. Conclusion# I hope you'll enjoy using MonoLisa. I've been using it for roughly a year, and there's no going back. :)

Debugging JavaScript - Interview with Mehdi Osman

Debugging JavaScript is one of those topics where people tend to be divided into two camps - those that console.log and those that use a debugger. In this interview, I am learning more about the topic from Mehdi Osman, the CEO of a company called Asayer. Can you tell a bit about yourself?# Mehdi Osman My name is Mehdi Osman, and I'm the founder of Asayer, a frontend monitoring tool. Before that, I spent a few years as a frontend programmer working with C# and something called Silverlight - a deprecated framework for building rich internet applications - before it was swept away by HTML5 and JS frameworks. How would you approach debugging JavaScript?# First and foremost, I try to reproduce the issue. Then I apply three simple techniques for hunting the bug: Avoid using console.log() unless it's necessary. Pause the JS debugger on caught exceptions, then walk through my code one line at a time. Test all my assumptions in the Console for a potential fix before applying it to my codebase. A small sketch of this workflow follows below. You can learn more on this subject by reading How to debug a ReactJS application on Medium.
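To make the workflow concrete, here is a minimal, hypothetical sketch (the function and data are invented for illustration, not taken from Mehdi's codebase):

// Instead of sprinkling console.log calls, pause execution and inspect state.
function getTotal(items) {
  // With DevTools open, execution stops here; step through one line at a time.
  // (A breakpoint set in the Sources panel, or "pause on exceptions",
  // works just as well.)
  debugger;
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// While paused, test assumptions in the Console before touching the codebase:
//   items.every((item) => typeof item.price === "number")
//   getTotal(items.filter((item) => item.qty > 0))
getTotal([{ price: 10, qty: 2 }, { price: 5, qty: 1 }]);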
How do you prefer to debug JavaScript?# I use Chrome DevTools. It has everything I need for debugging and auditing my application, and it comes shipped with the browser. Is debugging JavaScript any different than debugging other languages?# Unlike backend debugging, debugging frontend applications is a multidimensional problem. Your code may run differently from one browser to another. The rendering of your pages can be broken on mobile devices. Your application's overall performance depends on many factors - such as CDN, client device, third-party APIs, slow backend, and CPU/memory load - which can significantly affect your user experience. Why did you develop a service to help with debugging JS applications?# DevTools are useful, but they apply only locally in your development environment. Reproducing and fixing issues in production is another story and requires a different approach. That's why we built Asayer, a monitoring tool built by and for frontend developers. Asayer shows you a video replay of your user session synchronized with all the technical details you need to reproduce the issue: network activity, JS errors, console logs, store actions/state, backend logs, and more. In other words, it's like having your browser's inspector open while looking over your user's shoulder. Asayer Application What next?# We want to capture performance-related metrics such as memory usage, framerate, CPU load, speed metrics, crashes, and more. These issues are hard to understand and most of the time go unnoticed since only very few users would bother to report them. We'll also release: New plugins for capturing the state from NgRx, MobX, and VueX stores. GraphQL support for troubleshooting queries. Integrations with log management tools such as Datadog or Sumo; this way, we make bug reproduction even easier by synchronizing backend logs with the user session replay. Support for GitHub Issues, Jira, and Trello for easy reporting of tickets without leaving Asayer. Our goal is to capture the full context of every user session so developers can easily measure the impact of technical issues on their user experience and proactively fix them. What does the future look like for debugging JavaScript and web development in general? Can you see any particular trends?# The demand for rich and high-performing interfaces will continue to increase. Complexity is shifting from once-monolithic backends to frontend applications, and there's a lack of tools when it comes to shipping quality code and monitoring applications in production. The frontend is the last frontier when it comes to productivity and observability tools. I bet we'll see exciting initiatives over the next few years. What advice would you give to programmers getting into web development?# Start with the basics (JS, HTML, CSS) as you can reuse them with any framework. Think about performance from day one as it'll force you to follow best practices. Pick any JS framework, but make sure to master it enough before getting hyped up about the newest one. Keep learning and contribute to the open-source community whenever possible. Who should I interview next?# I can think of: Pierre Burgy - Founder of Strapi (a headless open-source CMS). Sébastien Chopin - Creator of NuxtJS (a meta framework on top of VueJS). Any last remarks?# Thanks for having me, and excellent job with SurviveJS. :) Conclusion# Thanks for the interview, Mehdi! I agree there's a lot we can still do when it comes to debugging the frontend. There's a surprising amount of complexity, and especially the number of devices and platforms we have to support as developers is immense. You can learn more about Asayer online, and it's free to try for personal development to get an idea of how the service works.

React Cosmos - A development environment for ambitious developers - Interview with Ovidiu Cherecheş

Developing user interfaces can be complex as you have to think about the different ways they're going to be used, and you have to design the patterns and components for defining them. That's where tools like style guides come in. Earlier, I interviewed Artem Sapegin about react-styleguidist. To continue the series, this time I'm interviewing Ovidiu Cherecheş, the author of React Cosmos. Can you tell a bit about yourself?# Ovidiu Cherecheş I'm Ovidiu, a web developer from Romania passionate about building user interfaces and making my own tools. With React Cosmos, I married the two and created an open-source tool that makes React devs like me more productive. How would you describe React Cosmos to someone who has never heard of it?# React Cosmos is a dev environment for building scalable, high-quality user interfaces. It enables you to develop React components in isolation and create UI component libraries. React Cosmos How does React Cosmos work?# React Cosmos works around component fixtures, which allow you to bookmark component states by defining sample component inputs. The sum of component fixtures creates your component library, which React Cosmos exposes under a beautiful UI designed to make you more productive.

// buttons.fixture.js
export default {
  primary: <PrimaryButton>Click me</PrimaryButton>,
  primaryDisabled: (
    <PrimaryButton disabled>Click me</PrimaryButton>
  ),
  secondary: <SecondaryButton>Click me</SecondaryButton>,
  secondaryDisabled: (
    <SecondaryButton disabled>Click me</SecondaryButton>
  ),
};

React Cosmos needs to understand your source code to work, and it hooks into your build pipeline to do so. Technically this means that you can use React Cosmos regardless of how you write your code, be it ES20XX, TypeScript, Flow, etc. But it also means that sometimes you need to push a few options into a config or two to make the integration work. Example views How does React Cosmos differ from other solutions?# React Cosmos is detail-oriented and straightforward. It does only a few things, and it does them well. React Cosmos works exclusively with React. Unlike other solutions, however, it can be integrated with bundlers aside from webpack. Why did you develop React Cosmos?# I started working on my first single-page app back in 2014. The overwhelming complexity quickly drove me to want a dev tool that allows me to build, as well as test, one component at a time. The team I was a part of created a prototype, a unique /playground route, where we hardcoded inputs for specific components. This workflow made us more productive, especially when debugging complex components. Today's React Cosmos is the evolution of that early prototype. What next?# So many things - automatic fixture generation, rich inputs for the props panel, showing multiple fixtures side by side, first-class support for other bundlers like Parcel, an official API for plugins, and the list goes on. What does the future look like for React Cosmos and web development in general? Can you see any particular trends?# If we were to let our imagination go nuts, I'd say a full-blown no-code UI builder. That's the dream, right? At least that's what everybody's talking about nowadays. But I lean more on the programmatic side. I look at the dev tools that sank in during the past years, and they're not groundbreaking tools that redesign how we work. They're slightly better versions of existing tools. Without shifting paradigms, projects like TypeScript and VS Code are beyond popular in 2020.
I see great potential in improving on known issues and delivering the best user experience on top of existing solutions. For this reason, React Cosmos will focus on improving existing workflows first and creating new ones second. At the same time, if we can automate repetitive work, we should. And UI work is crazy repetitive. So I also believe we'll slowly abstract away the boring parts of web development. Which part will be automated next? Well, that's a million-dollar question. What advice would you give to programmers getting into web development?# Choose your tools wisely, but don't focus too much on tools. They make all the difference, and you should carefully research the best tool for your project. But once you're productive with a set of tools, stick with them for a while. Keep sharpening that saw, but don't change it too often, as otherwise you'll never discover its full potential. Who should I interview next?# Alex Moldovan. Conclusion# Thanks for the interview, Ovidiu! I particularly like the way it's possible to define variants to describe components in different states. Design is about thinking through different options and figuring out good interfaces that capture the purpose of a given component or pattern. You can learn more about React Cosmos on the web. Consider also starring the project on GitHub and following React Cosmos development on Twitter.

Webix - Declarative UI Framework for Rapid Development - Interview with Maksim Kozhukh

If there's something that has changed during the past few years, it's the way we develop user interfaces using JavaScript. Earlier, we've learned about solutions such as Reakit. Now it's time to look into a library designed especially for rapid prototyping. I'm interviewing Maksim Kozhukh from Webix to learn more about their approach. Can you tell a bit about yourself?# Maksim Kozhukh I am Maksim Kozhukh, the head of Webix and a software architect with 20+ years of experience and extensive expertise in web technologies. I started my career in the Netscape age and came through all the web industry's trends, disasters, and inventions. Webix is one of the many projects I am managing right now. My substantial experience in developing full software products as well as single web components led to the Webix UI library. I embodied my ideas and web development approaches in this project. How would you describe Webix to someone who has never heard of it?# Webix is a JavaScript UI library for business web app development. The goal of Webix is to simplify and speed up the creation of the UI layer of web applications while keeping the related expense low. As we worked on the product and gained feedback, we realized that Webix is popular especially amongst business analysts. They use it to prototype high-level user requirements or to deliver some form of a visual proposal at the pre-sales stage. The approach is becoming more popular, especially in software outsourcing. We also noted a significant number of junior web developers amongst our users. That implies the library is easy to pick up and use, even with a modest amount of experience. Working with Webix# When starting with Webix, you will work inside the Webix environment with a large but finite set of UI controls and widgets. On the other hand, you will save a significant number of development hours that you can spend on other goals. Whether you make the UI with a JS library or from scratch, you will probably arrive at quite a similar result, but with very different effort. And honestly, when dealing with dashboards, admin panels, and user lists, you don't need to ride waves of inspiration. Mostly, you need the regular data grid, and you need it now. So why not use a ready-made one? Tree UI example# To give you an example, consider that you need a tree view with the following features: a tree indexer, data sets connected to the tree's branches, and UX items like a header and a draggable border (resizer). It should look something like this: Tree UI mockup It would likely take at least a couple of hours of work if developed from scratch, not to mention the design required. In Webix, the above looks like this:

webix.ui({
  rows: [
    { type: "header", template: "My app!" },
    {
      cols: [
        {
          view: "tree",
          data: tree_data,
          select: true,
          gravity: 0.3,
        },
        { view: "resizer" },
        {
          view: "datatable",
          autoConfig: true,
          data: grid_data,
        },
      ],
    },
  ],
});

That's all the code you need to declare this particular piece of functionality. What are the key advantages of Webix?# Webix provides more than a hundred UI widgets out of the box, meaning you'll end up writing less code yourself. For example, Webix provides a data table widget with 21 features like drag-n-drop, sorting, filtering, data validation, clipboard support, fixed areas, advanced editors, rowspan, colspan, grid grouping, vertical headers, sparklines, subviews, and more. The functionality is themeable, and several themes are provided out of the box.
The rendering speed of Webix is good, and this becomes vital as your application begins to consume more data. Besides, we've paid special attention to documentation and support. How about the disadvantages?# It's possible you'll encounter challenges in specific use cases, as a framework cannot cover every corner. I also wouldn't recommend Webix for hybrid apps where you use the web on mobile. Why did you develop Webix?# It started as an internal framework for our software outsourcing division. The key goals were to: unify the UX strategies, reuse the source code, optimize the performance and rendering speed, and replace pure JS with a classic object-oriented programming approach. Initially, we developed several commercial products and internal tools with the help of this library and received encouraging feedback in return. As a result, we decided to share the framework with the JavaScript community. Doing this led to tons of feature requests, customization questions, compatibility issues, and so on. Generally, Webix is not just a product but a team, and we build our library based on users' feedback. We can say that the JavaScript community inspired us to release this product on the market. What next?# We are actively gathering feedback, and we try to anticipate the wishes of our users. Soon we plan to release new complex widgets for electronic document management, user management, and data reporting. There will be complete SPA applications with elaborate design and business logic. In addition to that, we have started to develop a template catalog and design presets. These are small but nice perks which will allow our users to save additional time when developing and prototyping their applications. Alongside these, we are developing a WYSIWYG editor, which will allow UX designers to use Webix without programming. Besides, we have finished working on tight integration with Salesforce, so we are going to offer our clients highly productive components for creating custom Salesforce solutions. And of course, the library itself will be further developed and improved. It will become more productive, compact, and convenient with every new version. What does the future look like for Webix and web development in general? Can you see any particular trends?# It's a difficult question. On the one hand, we notice development heading to the web and the clouds, and the share of mobile solutions is growing. On the other hand, the maintenance of new web services and platforms still rests on client-server business applications. That's why, so far, I don't see any preconditions for revolutionary changes in the sphere of web development or Webix in particular. The other tendency we are observing is the avalanche-like growth in the number of massive web frameworks, and it's affecting Webix as well. We have noticed some Webix competitors going in this direction, leaving the market of UI libraries. As a result, the position of Webix is becoming even stronger. It has been a severe trend lately that we can't ignore. Today we are following this "battle" as users and developers choose one framework or the other. We are considering which direction to follow to contribute our expertise there. But the choice has not been made yet. Who should I interview next?# We can recommend a similar product, DHTMLX, and xbsoftware, as they work with startups. Conclusion# Thanks for the interview, Maksim.
I feel there's a lot of power in a declarative approach, and it seems like many UI approaches are headed in that direction. Note that if your project is open source and non-commercial, you can apply for a free license. You can learn more about Webix on their site.

VPNpro - How do VPNs work - Interview with Olivia Scott

If there's one web technology I never looked that much into, it's VPNs (Virtual Private Networks). In these times when privacy is more valued than ever, the technology has become more relevant to many users and developers. To understand the topic better, I am interviewing Olivia Scott from VPNpro. Can you tell a bit about yourself?# Olivia Scott My name is Olivia Scott, and I work for VPNpro. The world of cybersecurity is my passion - I am well-versed in the general topic of data safety and WordPress vulnerabilities in particular. I used to test privacy tools, including VPNs, so I am intimately familiar with the industry. How would you describe VPNs to someone who has never heard of them?# Explaining tech concepts to newbies can be difficult, but I find that the post office metaphor is helpful for understanding VPNs. Imagine you're forced to provide a return address whenever you're sending a letter to someone - the receiver would know your address, whereas the postal service would know both your address and the receiver's. The internet works this way, and you can't access anything on the web without "providing" your IP address (which can be used to identify you). That means the website you're visiting knows your IP address, whereas your ISP knows what site you've visited. Using a VPN is kind of like sending your letter to a nameless address, where someone would receive it, repackage it, and send it on to the real receiver (and repeat this action on the way back) - the receiver doesn't know who or where you are, and the post office doesn't know whom you're corresponding with either. Additionally, using a VPN means encrypting your messages, so no one can understand what you're up to even if they intercept your message. How do VPNs work internally?# At their core, VPNs are all about infrastructure - networks of servers that user data is siphoned through. These can be very small or massive, sometimes spanning dozens of countries. How they work differs wildly from one service to another. Some offer lots of different apps and features (undertaking the additional work that implies), whereas others stick to a bare-bones, servers-only approach. The contrast is between two options: custom apps and bare-bones solutions. Custom apps# Most of the commercially successful VPN services offer custom apps. That is to say, the user connects to the VPN server network via apps made by the VPN provider rather than third-party apps (such as the open-source OpenVPN app). Doing this has certain benefits: branding opportunities, freedom in terms of the type of features you include in the app, a custom interface, the ability to track individual user behaviors via the app for optimization, etc. However, it's a lot heavier on the service provider's resources: the apps (and features) need to be developed, patched continuously, and so on. The burden increases even further with every new device type you decide to target with an app. Some VPNs have applications not merely for the big four (Windows, macOS, iOS, Android) but also Linux, browser extensions for Chrome/Firefox/Safari/Opera, Amazon Fire TV, Kodi - the list goes on. Bare-bones solutions# Some providers have a network of VPN servers, and that makes up most of their offering to customers. In this situation, you will use a third-party (e.g., OpenVPN) app to connect to the provider's VPN servers. The second approach is a lot less resource-intensive, but by choosing it, the provider forfeits all the benefits as well.
How do VPNs differ from other solutions?# Unlike proxies (some of which can be used for privacy or to beat online censorship), VPNs cover your entire internet connection, not merely your browser or torrent client. Unlike Tor, (good) VPNs are paid services, but they are also a lot more universal in terms of what you can use them for, and they offer much better performance. Smart DNS tools don't make your performance suffer when streaming Netflix, but they neither obscure your IP address nor encrypt your data. In short, a VPN may not be the best tool for every particular application, but it is the most comprehensive and universal one. Why did you decide to start running VPNpro?# Accurate and useful information about VPN services is scarce, whereas the importance of VPNs and the market as a whole keeps growing. That was already the case when VPNpro opened shop, and it's even truer today. What next?# There's still plenty to do with VPNs. The industry is dynamic and booming. With that said, we have already started covering some of the other pieces of the cybersecurity puzzle - password managers, for example. What does the future look like for VPNs? Can you see any particular trends?# Their importance can only grow. Privacy on the internet is becoming ever rarer, and VPNs can counteract this trend to some extent. There are also dangers to be wary of - popular VPN services being owned by fewer and fewer companies, governments using VPNs to gather user data, and so forth. Conclusion# For a web developer, it's interesting to understand where all the traffic is going and how. Understanding VPNs is one of the keys to this, and I believe usage of these services will go up as people become more privacy- and security-minded.

Squareboat - Growing an IT Business - Interview with Gaurav Gupta

What is it like to grow an IT business? At least in my experience, it's not the easiest thing to do, and there are many things you have to get right to stay afloat. To learn more about the topic, I am interviewing Gaurav Gupta about his company, Squareboat. Can you tell a bit about yourself?# Gaurav Gupta I am a tech entrepreneur who wants to use technology to make the world a better place. After spending a big part of my life in the industry, I feel that an understanding and control of technology is the ultimate superpower anyone can have. I have worked with some of the world's biggest companies for over 12 years, immersed in developing and maintaining vast and scalable web and mobile applications end-to-end, including product conceptualization, design, backend and frontend development, deployment, server management, uptime, reliability, performance, and scalability. With Squareboat, I am working towards my goal of adding value to the lives of people who use the products we develop. How would you describe Squareboat to someone who has never heard of it?# Squareboat is an Internet startup with incredibly talented people working together under a single roof. We build beautiful, scalable, and feature-rich web and mobile applications. Some of the world's best companies trust us with making their web and mobile applications. What is the business model of Squareboat? Has it changed over time?# Our business model has been the same from day one. We build mobile and web applications for our clients and hopefully eliminate the need for our clients to ever hire, or feel the need to hire, their own tech teams. How does Squareboat differ from other vendors?# I think our most significant difference is our ability to understand complex system requirements and present them to our end users in a clean, simple, and understandable way. We're big believers in making products that are clear, have a great UX, and can be understood at a glance. Why did you start Squareboat?# It was clear to me at an early stage that the Internet was about to revolutionize every business. It was also evident that the average software company in India was simply not capable of delivering on the needs of these new-age startup founders. Doing this requires not only excellent technical capability but also someone who can speak to them in their language, understand their business goals, and then architect products accordingly. Squareboat was formed precisely to solve this need. What next?# Hopefully, an even bigger team, more products, and many more success stories! What does the future look like for Squareboat and web development in general? Can you see any particular trends?# The future holds immense opportunities for Squareboat. Technology is changing rapidly, and so are we. Looking at the trends, we anticipate that the diversity of new platforms is going to take software development to another level. Whatever the future may be, Squareboat will always keep ruling the roost and delivering the best experience to everyone. What advice would you give to programmers getting into web development?# Keep your basics and concepts clear because technology keeps changing every now and then. Choose projects and practice over theory. Any last remarks?# To conclude, I would like to say that technology has become the backbone of humanity, and it is essential for people to understand both its pros and cons.
As the founder of a tech company, I am excited about the possibilities that the future holds for us, and we as a company are up for the challenge of continually remaining leaders in the space and impacting millions of lives with our beautiful and impactful products. Conclusion# Thanks for the interview, Gaurav! The most exciting part of the technology market for me is figuring out ways to deal with the pace of change. The way we develop keeps changing rapidly as new technologies appear, and as an entrepreneur, it's a challenge to keep up.

Tomo - Like CRA but more flexible - Interview with Jason Wohlgemuth

Although tools like create-react-app (CRA) are great, they can at times be inflexible. They might provide great defaults, but at the same time, you lose some power and room for experimentation. To learn more about an alternative tool, I'm interviewing Jason Wohlgemuth, the author of Tomo. Can you tell a bit about yourself?# My name is Jason Wohlgemuth. I am a principal software engineer working for BAE Systems in Omaha, Nebraska. I am a Jesus follower, husband, and father of 3 boys (two of which are identical twins). I am a progressive polyglot who prefers the front-end and likes to think that I can improve just about anything using science and software. In my free time, among other things, I like to make stuff...like tomo. How would you describe Tomo to someone who has never heard of it?# Tomo is a friendly command-line tool that you can use to quickly craft and customize sustainable software using the latest and greatest web technology. Not sure what build tools to use? Great. Tomo allows you to compare apples to apples and go deeper after you have made an informed decision. As an example, you can use tomo to scaffold a React web app that builds with webpack. Then you can swap out webpack for Parcel and quickly compare the benefits of both (hint: Parcel HMR is much faster for small projects). By the way, HMR (or live-reload for Marionette.js-based apps) is included by default. Tomo can scaffold new web apps, native apps, libraries, and servers and can add testing, linting, CSS support, and more to existing projects. How does Tomo work?# Tomo starts with an intuitive command-line interface. Don't know what to make? Use Tomo's menus. Make a mistake while typing? Take comfort in Tomo's friendly automatic error correction suggestions. Tomo is built with Node.js and leverages the React API using vadimdemedes/ink. Behind Tomo's declarative UI is a collection of modular commands (like "create source directory" or "add ESLint support"). The UI pieces together the desired commands to be executed and keeps the user updated as the process progresses. The index file for Tomo's commands reads like a checklist for building things. In short, the UI is rendered declaratively using ink, and the commands are executed using Sindre Sorhus's library, execa; a rough sketch of that pattern follows below.
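To give a feel for the ink-plus-execa pattern, here is a minimal, hypothetical sketch of a CLI that runs a command and renders its output declaratively (this is illustrative only, not Tomo's actual source; it uses React.createElement so it runs without a JSX build step):

const React = require("react");
const { render, Text } = require("ink");
const execa = require("execa");

const RunCommand = () => {
  const [output, setOutput] = React.useState("Running...");

  React.useEffect(() => {
    // Execute an external command and surface its output in the UI.
    execa("node", ["--version"]).then(({ stdout }) => setOutput(stdout));
  }, []);

  return React.createElement(Text, null, output);
};

render(React.createElement(RunCommand));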
My "omaha" generator allowed one to make various choices of technology and create projects, web and native apps, and servers. The generator had a hard dependency on yeoman, limited UI options, and did not cleanly allow one to augment existing projects. I decided to re-write the generator and eventually I decided to re-write it from scratch, not as a generator, but as a stand-alone CLI app. I had taken an interest in "React for the CLI" and had seen the success of projects like emma-cli. Furthermore, I wanted to have finer-grain control of the UI and desired a pleasant DX while building it...this is why I decided to give ink a try. Around that time, I found myself in Africa for a year at the behest of the United States Navy. With nothing but my wits and my trusty Surface Pro 6, I spent most of my year creating what I now call tomo. More generally, tomo (and its predecessor, omaha) is how I codify my learning. That is, every pattern and best practice I know is encoded into Tomo. You could say tomo is my dynamic digital living codex of web tech knowledge that I use to build, share, and remember. Lastly, as I found myself dipping my toe in other languages such as Rust, ReasonML, ClojureScript, and Elm, I had the need for adding CSS support (tomo add postcss)...or the desire to integrate ReasonML into an existing JS project (tomo add reason). What next?# That is a tough one. As for Tomo improvements and new features, I catalog my plans on a public Trello board. Other than new features, I intend to make things with tomo to prove its worth (both to me and others). I will continue to update Tomo as I learn and will use it in my projects. A couple of stand-out features high on my list are enhanced native app creation, stdin support for using config files to run tomo, and codemod integration for doing things like transitioning from AMD to ES6. My long term vision is to include Tomo as part of a more significant effort to create a voice-enabled emotive digital assistant. I have a multi-tiered plan for getting there, but the next step might be adding voice support to tomo. The general idea is to have tomo running and listening to your voice as you develop. During your development, tomo would respond to commands like "lint this file", "run my tests", or "create a server". Unrelated to voice support, I just added tomo new app --with-cesium [--use-react], so I will probably be making an app using CesiumJS, resium, and Uber's H3 soon...ish (tomo makes it really easy to deploy a Cesium app to surge or now :wink:) What does the future look like for Tomo and web development in general? Can you see any particular trends?# Tomo's future is bright: as long as I am making things with web technology, I will be codifying my knowledge into tomo. Web development, in general, is a horse of a different color. AMD seems dead, WebAssembly will continue to gain popularity and React will stay relevant and remain more than a UI library. It is a pattern that crosses the language barrier (ex: reason-react and reagent) (note: I love React hooks). As far as what I would like to see - I am a big fan of #usetheplatform (#usetheplatform is a driving force behind tomo). That is, tools like Babel and frameworks, in general, are like scaffolding around the building that is your web application. As the "platform" (browser) evolves and gains functionality (like web components), the scaffolding will be removed. Maybe one day, Babel and PostCSS will no longer be needed (frameworks in one form or another will probably persist). 
With the availability of universal support for "module type" scripts, even bundlers may one day be a thing of the past (especially if projects like pika have anything to say about it). I really want to believe web components have finally arrived, but I have had that thought several times in the past few years...despite my pessimism, I love lit-element and even integrate lit-html into tomo Marionette.js apps (tomo new app [--native]). Finally, I LOVE functional programming. I hope it continues to gain traction (neither of us has the time in the day for me to convey all my thoughts/emotions on functional programming 🤓). What advice would you give to programmers getting into web development?# Other than "use tomo"? 😁 I might say something like: become comfortable with math and try to stay "implementation agnostic". It hurts my heart when I read articles that espouse beliefs like "math is not important for web development". Perhaps you can avoid math if you stick to making simple landing pages with GatsbyJS or so. If you are going to make things of notable complexity or move the field forward, you will need math. Now, I do not mean to say a Master's in Mathematics is the cost of entering the field of web technology. Whether it is purposeful or otherwise, good developers use math regularly. Example: have you ever seen something like !(!A || !B) and re-written it as A && B? That is one of De Morgan's laws, and I use it at least weekly. I am aware that this may not be a pleasant topic for some, but as a lifetime lover of math, to me, it is just spreading the joy of the "...most colossal metaphor imaginable" 😁. Also, the functional programming paradigm is based on concepts such as lambda calculus, so...math. By "implementation agnostic", I mean to say avoid being an "Angular developer" or even a "React developer". Instead, strive to be a "web developer who understands the underlying concepts of [Angular|React|whatever] and is familiar with its usage". If you discover the underlying patterns of widespread technology, you will find that the web technology deluge is not as deep as some might lead you to believe. Who should I interview next?# I have three suggestions: Chris Maltby - He created GB Studio, an unbelievably cool project that allows one to create Gameboy games! GB Studio consists of an Electron game builder application and a C-based game engine using GBDK. Super fun. I would love to hear more about its creation and development (and want to see it gain popularity). RangerMauve - RangerMauve created a local-first-cyberspace manifesto. I would be interested to hear more on the topic from the source. Jacob Jackson - He created an intelligent code completion engine using a deep learning model trained on GitHub data. It is available as extensions for Atom and VS Code, called TabNine, and it is written in Rust. I would be interested to know more details on how he made it and what he sees in store for the future of TabNine and deep learning-powered code tools. Any last remarks?# Standards matter. Pick one and stick to it. This is the ESLint part of mine. My favorite functional programming quote is by Michael Feathers: Object-oriented programming makes code understandable by encapsulating moving parts. Functional programming makes code understandable by minimizing moving parts. My favorite comment about web tech is from Jon Schlinkert: I guess we're just stuck in this downward spiral of continuous improvement. By the way, how's that job search for a dedicated Makefile engineer going?
Most of all: thank you! I sincerely appreciate the chance to speak on some of my favorite things. Conclusion# Thanks for the interview, Jason! It's great to see developers create tools for their own needs out of sheer passion for creation and codify what they've learned. Learn more about Tomo on GitHub.

Secure Coding - Interview with Liran Tal

There's more than one way to code, and you'll find multiple programming styles. One aspect that's perhaps sometimes neglected is secure coding. To learn more about the topic, I'm interviewing Liran Tal, a security expert. Can you tell a bit about yourself?# Liran Tal Hi! I'm from Israel, married, and a father to five-year-old Ori Tal. I've been dabbling with software and open source since elementary school, mostly through the FOSS movement around the Linux OS. I was drawn to information security throughout my childhood and adult life; it was always an on-and-off thing until recently. For example, I authored a book about Node.js security and work with the Node.js Foundation's Security working group. These activities led me to seriously consider a Developer Relations role at Snyk. Ultimately, I moved from being a software developer and team lead to doing security full-time. How would you describe Secure Coding to someone who has never heard of it?# Taking into account the security aspects of the code we write is as important as ensuring it's performant and bug-free. Secure coding contributes to the overall quality of our software. Similar to other doctrines, to write secure code, we follow best practices, secure conventions, and standards. This way, we ensure that the code we write follows standards and is free of security bugs. Why is Secure Coding important?# As a developer, I naturally relate a lot to secure coding practices because, as developers, the security vulnerabilities that manifest due to security bugs begin with us. The state of open source security report from 2018 revealed the median time of a security bug from introduction to discovery. Based on the study, it takes no less than two and a half years! How does Secure Coding differ from other forms of security?# There are indeed many forms of security, as well as verticals in which security kicks in. Good examples are network security or application security. The common denominator, however, is software. Software is eating the world, and just like Software Defined Networking, many other technologies will eventually become software-based. When we zoom in on the software development lifecycle, security should be embedded throughout the entire lifecycle, from planning and design to development, testing, and production monitoring. Secure coding is that first phase in the lifecycle where planning meets implementation. OWASP has a significant number of resources, one of which is the Secure Coding Practices reference that serves as a good starting point; a small example of the mindset follows below.
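As a taste of what a secure coding practice looks like in JavaScript, here is a minimal sketch of avoiding command injection in Node.js (the gzip command and input handling are hypothetical, chosen purely for illustration):

const { exec, execFile } = require("child_process");

// Input that crosses a trust boundary, e.g., taken from a CLI argument.
const userSuppliedPath = process.argv[2];

// Vulnerable: the input is interpolated into a shell command, so a value
// like "file.txt; rm -rf ." injects an extra command.
exec(`gzip ${userSuppliedPath}`, (error) => {
  if (error) console.error(error.message);
});

// Safer: no shell is involved, and the argument is passed as plain data.
execFile("gzip", [userSuppliedPath], (error) => {
  if (error) console.error(error.message);
});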
What does the future look like for Secure Coding and web development in general? Can you see any particular trends?# I believe that we are going to see a lot of automation and developer-empowered workflows and tools. These will help us make sure that we bake security into the development process and not treat it as an afterthought. The trend makes a lot of sense because of the scale that we're facing. There are a hundred developers for every security person in an organization. That situation is hard to scale, and it's impossible to run manual reviews for every commit that gets deployed quickly as we embrace CI/CD and DevOps. Because of this, we need excellent security automation tools to help us realize good security in our applications. What advice would you give to programmers getting into web development?# For anyone getting into software development, I'd recommend unlocking yourself from the chains of frameworks and the pressure of keeping up with trends. Focus on building things you are passionate about and challenges that fuel your brain. Connect with communities and colleagues so you can enrich each other with knowledge and confidence. As you are making your way in software, take the time to study essential software development skills. These include the essence of writing tests, the art of debugging, the importance of software security, and the principles of security best practices. One of these communities is The Secure Developer, where we feature Jim Manico among other great speakers and AppSec influencers to support developers. I take part in this community and invite you to join, as we're running webinars, a newsletter, and a Slack group for security-minded developers. We cover secure coding practices and all-around web security topics. Who should I interview next?# @BenedekGagyi or @AlyssaHerrera are going to have interesting stories, which I'm eager to read about! Any last remarks?# Open source is fantastic, and we're at exciting times in software and technology in general. Take a deep breath and jump in! Conclusion# Thanks for the interview, Liran! As a topic, secure coding is one of those techniques that often gets overlooked, and I feel there's a lot for many developers to learn. I can't wait for more tooling to appear in the space to help us develop robust, more secure software.

Experiences on CSSCamp and JSCamp 2019

I was invited to CSSCamp 2019 (17.07) and JSCamp 2019 (18-19.07) by David Pich, the main organizer, to help out. I performed speaker interviews and a part of the Twitter coverage while documenting the event a bit. Overview of CSSCamp and JSCamp 2019# Hello from CSSCamp or JSCamp CSSCamp and JSCamp were held as a dual conference in Barcelona, Spain. CSSCamp was held for the first time, while JSCamp ran for the second time in a row. Before that, there was a year of AngularCamp before the event was rebranded. In total, the events attract roughly 600 developers, and especially the conference portions seemed close to sold out, although there was space in the workshops. I know from experience that selling can be unpredictable, and many things can go wrong. Preparing for the speaker interviews The Venue# Especially the lighting of the venue was impressive The event was held at Auditorio AXA, the same venue as last year. Although not the most spacious, it's one of the more attractive conference venues I've seen, and it fits the purpose quite well. Capacity-wise, the conference cannot grow within the same space, but that said, the scale seems fine even now. Sometimes events become too big, and as a result, the atmosphere suffers. The Workshop Days - 15-16.07# Coffee at the workshops The conference began on Monday with a workshop by Kyle Simpson. He held another one on Tuesday, and that's when I joined the event. On Tuesday, Sean Larkin and Harry Roberts held theirs in addition to the second workshop by Kyle. The workshops were held in a full-day format. I cannot comment on the quality of the workshops as I didn't attend any this time. Learning hard at a workshop CSSCamp - 17.07# CSSCamp was held right after the workshop days. It was targeted towards both designers and developers, and I feel it made the whole conference more diverse than last year. The day started with a keynote by Tatiana Mac, where she discussed how to build socially inclusive design systems. It's easy to forget the context in which we design, and by understanding our biases, we'll be able to create more accessible designs. Tatiana Mac delivering her keynote After the keynote, Jason Pamental followed with a technical talk about variable fonts and their potential for the web. Variable fonts move away from the traditional thinking of fonts where you have multiple separate variants of the same typeface to choose from. Instead, we have a single typeface we can alter on various dimensions using variables, hence the name variable fonts. Cassie Evans' talk showcased how to use SVG for animation on the web, and it was well-received by the audience. I think there's a lot of potential there we are still missing as developers. The funny thing is that as a specification SVG is old, but we still don't use it to its full potential. It's nice to see SVG receiving more attention, especially now that high-resolution displays make using it pay off. Cassie Evans at the makeup lady - each speaker received the treatment Oliver Turner continued on Houdini, the upcoming standard that's going to change the way we write and think about CSS. The talk was valuable as it showed perhaps a glimpse of a future where innovation in CSS is driven by the community, not by the standards bodies. Oliver Turner captured from the backstage After the lunch break, Stéphanie Walter dived into design techniques that let us improve perceived performance.
Although actual, absolute performance is essential, by understanding a bit of psychology, we can create applications that feel faster by designing them the right way. Adam Argyle's following talk went in the opposite direction, claiming that sometimes our user interfaces are too fast. By slowing down the animations, you create an entirely different perception of the application. The idea is related to increasing padding on a site, as having more room creates a distinct impression. In his talk, Sergei Kriger discussed how to make asynchronous user interfaces accessible. Many applications are asynchronous like this, and it was interesting to see how you might perceive such an interface from an accessibility perspective. It's an aspect I hadn't thought about before. Harry Roberts dug into the concept of resource hints. They are asset loading annotations that tell the browser something about your preference but don't force it to do anything. That's why they are called hints. Perhaps the best-known example is rel="preload", as you can use it to tell the browser to load assets even before you need them to speed up subsequent usage of your application or site. Lara Schenck took a refreshing look at how to write and test CSS. Her thesis was that you could treat CSS as a programming language. Especially the trendy topic of utility classes taps into this. In the approach, you wrap specific functionality behind a particular class and then combine the results to create the user interface you prefer. Although the method requires effort when you construct the classes, you then have something you can share between views and, most importantly, test. Testing CSS is an underappreciated topic, and it was great to see an approach that lets us do that. JSCamp - 18.07 - Day 1# Kyle Simpson preparing for his keynote After the CSSCamp day, it was time for JSCamp and its two days. The first conference day began with an 80-minute keynote by Kyle Simpson. His main point was that we should rethink the way we develop our sites and applications. Although ideas like progressive enhancement were valid in the past, we need a bit more. Kyle's thesis was that we should let users define which fidelity they prefer by using a quality metric. The implementation can be a slider that allows the user to choose the version of the site they prefer. To implement the scheme, we'll have to change our thinking and make our work more composable. We have to begin to think in terms of feature sets and different versions of a feature. The idea would push control over the quality of service from the developer to the user and shift the focus. Jenn Creighton's talk continued on the topic of iterators. She held the presentation in a refreshing "choose your own adventure" format that made it a bit more interactive while discussing the issue. Iterators are behind many of the basic structures of JavaScript, and understanding them lets us access some of the power introduced in the new features of the language over the past few years. Sigurd Schneider, a part of the V8 team at Google, dug into the internals of V8. He gave an in-depth talk on many of the features that aren't discussed so often. In terms of memory management, weak collections seem like a way forward. I expect most JavaScript developers won't end up using them, as it's a feature especially for library authors to leverage. After a lunch break, Maya Shavin discussed the rise of CSS-in-JS.
Overall, it was an excellent introduction to the topic, and I'm definitely in favor of the approach as it lets me develop component libraries effectively. The utility class movement is parallel to this as it addresses the same concerns of CSS maintainability. Adam Argyle's second talk dug into pwototyping. By this, he means using a modern PWA approach to prototype a high-fidelity mobile application fast. It's achieved by using a set of techniques that let us deliver high performance. The idea is not to develop the functionality but rather to envision what it might look like to a potential investor or a client. The approach optimizes development speed and fidelity in favor of architecture and other concerns. The point is to use a spike like this to sell the project, which you then implement properly. Shawn Wang continued on the idea of JAMStack. His thesis was that JavaScript is eating the world of static site generation, and this, in turn, is affecting the way we approach even large content-driven sites. As time goes by, it becomes possible to push more and more functionality into this realm while gaining many benefits of static websites. I'm a proponent of the approach, and I've seen its power in practice as I've developed my sites. Especially creating your own content API and then driving site generation from that is powerful, and I believe there's still more to come as technology evolves. JSCamp - 18.07 - Afterparty# Sala Bikini The afterparty of the event was held at Sala Bikini, right underneath the main venue. The space itself was great, although the party may have been too late for people to join. I decided to turn in early to prepare for the last day of the conference as it had been a rough week already. Partying at afterparty One of the main attractions of the afterparty was a synthpop band. Although I missed the show, I had a chance to chat with the band during a speakers' dinner; they seemed like fun people, and I bet the gig was great. Preparing for the gig JSCamp - 19.07 - Day 2# There was something interesting to look at but I don't know what Day two of JSCamp started with a talk about accessibility patterns by Garance Vallat. She included many practical examples, and it was an informative talk to follow. For me, as a conference organizer, it would have been interesting to pair the presentation with Sergei Kriger's, as then you would gain an enhanced perspective on the topic. Ruben Bridgewater's talk discussed the topic of error handling. I think it was an excellent topic for the conference, given JavaScript developers often don't give a lot of thought to it. One learning was that I should leverage inheritance to create a hierarchy of Error classes so that when I raise an error, I get its category straight out of the box. The other learning was that it makes sense to have a layer where you at least capture and log possible errors. When discussing with him, I learned he is in favor of letting applications crash fast, as at least then you have a clear state from which to fix. Based on that, I would say it's essential to take care of your monitoring so that you fix potential runtime issues as you notice them.
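A minimal sketch of such an Error hierarchy (the class names are hypothetical, not taken from Ruben's talk):

class ApplicationError extends Error {
  constructor(message) {
    super(message);
    // The category comes straight out of the box via the class name.
    this.name = this.constructor.name;
  }
}

class ValidationError extends ApplicationError {}
class DatabaseError extends ApplicationError {}

try {
  throw new ValidationError("email is required");
} catch (error) {
  if (error instanceof ValidationError) {
    // Handle expected input errors here...
    console.error(`${error.name}: ${error.message}`);
  } else {
    // ...and let unknown errors crash fast so the state stays clear.
    throw error;
  }
}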
Sean Larkin's presentation delved into how Microsoft makes applications fast. It makes sense to have a protocol on how you track performance issues and document what you are doing so other people can learn as well. Sometimes the fix itself is trivial, but to fix something, you have to be aware that something was wrong in the first place. It's one of the points where monitoring what you and your users are doing can help. Paul Verbeek-Mast's talk went into the evolution of dev tools. It's an underappreciated topic, and it was great to see how much the tooling has evolved. I learned that Firefox has particular functionality in its tooling to visualize flexbox and grid, while Chrome's tooling seems better for tracking performance issues. After the lunch break, Vladimir Agafonkin, the author of Leaflet and many other popular libraries, discussed algorithmic performance and its implications in practice. It's one of the things many people learn during their time at university. Given there's a large number of developers who don't understand the topic, it was great to see the talk. It was particularly instructive to see practical examples of speedups where, by first understanding the problem, Vladimir was able to speed up the code significantly. It can mean your code won't look as clean after optimization, but that's the tradeoff to make to cut algorithmic complexity. It's an especially valuable skill to have if you work on code that requires high performance. Rich Harris' talk was about Svelte, a web framework using a compiler-heavy approach. Compared to libraries like React, the amount of code you have to write is significantly smaller, and that was also the central thesis of Rich's presentation. By writing less code, you end up writing fewer bugs. The declarative approach provided by Svelte is refreshing. You can learn more about an earlier version of Svelte in the interview. Henri Helvetica ended the conference by discussing the evolution and the shape of the web. It was more of a philosophical talk that put what we are doing as web developers into context. As web developers, we have a great responsibility in how the web develops. We should take good care of it, so it's there for future generations to use as we did in our time. Did you know Helvetica is Henri's stage name? Conclusion# At Tibidabo, one of the speakers' dinner locations For me, both CSSCamp and JSCamp are reliable conferences. They seem to pull the local communities together, and compared to last year, I think the audience was more diverse. It was a good move to attract more designers to the events by including the CSS theme. The venue itself is excellent, although it's now running at its limit, meaning the conference cannot grow further. If you want to experience Barcelona in the summer, it's not the worst idea to visit the conferences. The content is good, and the city is fantastic. I can't wait to see what David has in mind for next year. You can find my conference photos online.

Sketch.sh - Interactive ReasonML sketchbook - Interview with Nguyen Dang Khoa

Often learning a new programming language like ReasonML can be an arduous process as you have to set up the environment and tooling before you can begin learning. That's where Sketch.sh by Nguyen Dang Khoa comes in. It's an online service addressing the problem for Reason. Can you tell a bit about yourself?# Nguyen Dang Khoa Hey. My name is Nguyen Dang Khoa. I'm a freelancer from Viet Nam. I usually work on React.js/Node.js/ReasonML projects, and I built Sketch.sh. How would you describe Sketch.sh to someone who has never heard of it?# You can think of Sketch.sh as a notebook with an integrated code editor. You can execute ReasonML/OCaml code and get the results immediately. How does Sketch.sh work?# OCaml has its own virtual machine called ocamlrun for executing the bytecode produced by ocamlc. Here is an overview of OCaml's compilation targets: OCaml compilation targets js_of_ocaml is an OCaml-to-JavaScript compiler. Sketch.sh works by using js_of_ocaml to compile ocamlc and ocamlrun to JavaScript and then executing the user's input code by calling these tools. How does Sketch.sh differ from other solutions?# Some solutions, like repl.it and nextjournal, offer the same functionality, but all the code execution is done server-side. They developed a special server infrastructure to handle this kind of work. With Sketch.sh, everything is executed inside the user's browser. Sketch.sh's server is used for saving, handling authentication, and many other things. Why did you develop Sketch.sh?# ReasonML is a fully typed language with a strong inference engine, so you rarely see any type annotations. This is a big problem when helping others with their bug/coding issues; I have to open the terminal and run the compiler each time. I created Sketch.sh to reduce the friction when asking questions about ReasonML. Currently, the process looks like this: Hey, I have this coding issue. Here is a Sketch with runnable code, could you help me fix the bug? Here is a new Sketch with that bug fixed. How cool is that? What next?# I'm currently working on supporting BuckleScript and ReasonReact as well as improving SEO and the discoverability of popular sketches. What does the future look like for Sketch.sh and web development in general? Can you see any particular trends?# Becoming a developer is getting more comfortable every day since there are tons of tools that help you learn about web development without leaving the browser. I hope that Sketch.sh will be part of that trend for folks who want to learn about ReasonML and web development in general. What advice would you give to programmers getting into web development?# Learn the basics first: JavaScript, HTML, and CSS. It will benefit you in the long run. Frameworks come and go; only the languages (JavaScript) stay. If you know JavaScript, you can adapt to any framework (React, Angular, Vue, Ember, and so on). Who should I interview next?# Leandro Ostera: I love his Reasonable Coding stream and his work on improving the ReasonML documentation. Bryan Phelps: He's building oni2. Ulrik Strid: He's a wizard when it comes to setting up Azure Pipelines and automating the CI process. Andrey Popp: He's the main developer of esy, the ReasonML native package manager. I can't imagine Sketch.sh without it. Any last remarks?# If you're interested in Sketch.sh or working on some challenging UI, come around and say hi at @sketch_sh. Conclusion# Thanks for the interview, Nguyen!
I think Sketch.sh is a great boon for people interested in Reason and also for those who want to support people learning it, as it allows you to illustrate the usage of the language in a convenient manner. You can try Sketch.sh online and check out the code on GitHub.

Experiences on WorkerConf 2019

I was invited to WorkerConf 2019 (27-28.06) as a speaker. I also participated in a workshop and went hiking after the conference. It was a small-scale conference (about 80 people) held in Dornbirn, Austria. It was my second time in the city, and I especially enjoyed exploring the alpine scenery and the nearby cities on my way to the yearly webpack meetup held in Munich. Overview of WorkerConf 2019# Dornbirn at night WorkerConf was held for the second time in Dornbirn. It's a sibling to Agent Conf by the same organizers. While Agent Conf is about the frontend and skiing, WorkerConf focuses on the backend and summer activities such as hiking or swimming. Both events have workshops available, and I think it's a good idea to combine conferences with something more than just technical content as it allows you to experience the region and the culture while making your trip more complete. The Workshop Day# Tractor at Dornbirn I took part in the Node.js and Fastify workshop by Matteo Collina and Tomas Della Vedova. The half-day workshop (4 hours) covered the Fastify web framework in detail and gave the participants a good idea of its capabilities and approach. The instructors were knowledgeable, but I would have enjoyed a more relaxed pace by having an entire day to dedicate to the topic. Often there are technical issues to debug, and a full-day format gives more room for discussion. That said, I think Fastify is a great framework to study, especially as a replacement for the aging Express. The promise of high performance combined with a robust plugin architecture makes this forward-looking framework a winner in my eyes. The Conference Day# Atmosphere As is usual for smaller conferences, WorkerConf ran in a single day format. The conference day was long as the program began at 9:00 after a brief breakfast and ended around 20:30 after the last talk. There were thirteen speakers in total, and the talks varied from more general ones to technology-specific ones. The Venue# Waterguns! The organizers were rather unlucky with the weather as the event was hit by a heatwave. The small venue (Plattform_V) and its air conditioning couldn't keep up with the conference, and I prefer the nearby venue used for Agent Conf instead. The organizers arranged refreshments, and there was plenty to drink, so that helped a notch. The screen of the venue was quite nice (though limited to 720p), but unfortunately a part of it was obscured by the pedestal setup, making it difficult to follow the presentations from the side at times. That's more of a limitation of the space, though, and it wasn't a major annoyance. The catering for the lunch and the dinner was provided by a food truck parked nearby. The venue itself had occasional snacks, and overall the food was of better quality than at most conferences I've been to. Likely the venue would have worked much better if the weather hadn't been so hot. As it was, a large portion of the attendees had left the venue well before the last talks. 13 Speakers, Broad Web Topics# WorkerConf In total, the conference had thirteen speakers with varying web-related topics. The topics were well organized, and there were plenty of tidbits to learn even for an experienced practitioner. The breaks were long enough and gave a good chance to escape the heat a bit. Ultralight Java Microservices with GraalVM - Thomas Würthinger# Thomas Würthinger Although I don't develop in Java, there are lots of technologies that emerge from the space. Oracle's GraalVM is one of those.
I wasn't familiar with the virtualization solution before, so it was nice to get an introduction to the topic. The main thesis of Thomas Würthinger was that there's a gap between interpreted and compiled languages. Both have their strong sides, and it's possible there are some ways to bridge this gap over time. At the moment you'll have to compromise between the characteristics you want. The interesting thing about GraalVM is that the environment has been designed to run a lot of different languages and even to mix them. You could run Java inside JavaScript and vice versa if I understood correctly. This means you could implement portions of an application in the language best suited for it. JavaScript Class Features: A case study in TC39 - Dan Ehrenberg# Dan Ehrenberg Dan Ehrenberg from TC39 discussed the standardization process to show people how JavaScript evolves through the influence of the committee and people taking part in its operation. You don't have to be a member of the TC39 working group to affect the evolution of the language, as its work is public on GitHub. When it comes to decorators, Dan's main topic, it was interesting for me to see how the community forced the hand of TC39 to standardize the feature by first having unofficial implementations of the requested feature in the wild. The rising popularity of the feature meant it would most likely have to become a part of the official standard. JavaScript - Quo Vadis - Juho Vepsäläinen# Quo Vadis I gave my talk about the future of JavaScript. I had given the talk a few times before. The first time I did it was in 2016, so it was interesting for me to go through the slides and update everything. Many of the original points still stood while a few new ones had appeared. It feels like the ecosystem is in constant motion. The current struggles seem to center around package management (npm) and the increasing popularity of typing (TypeScript). My expectation is that we'll see progress on these fronts during the next 5-10 years as the community adapts to distributed alternatives for package management while adopting typing at an even larger scale. It's likely the awareness of the value of typing will lead people to new ecosystems. It feels like ReasonML is in the same place where TypeScript was five years ago when it was still a young language. I don't expect Reason to become mainstream, but I think it will contribute a lot to the development of JavaScript and its ecosystem by showing us better ways of developing while innovating in the space. You can find the slides of my talk online. Visualizing cloud architectures in real time with d3.js - Julie Ng# Julie Ng Julie Ng's talk focused on her experiences around cloud architectures and microservices in particular. Her talk was a great tutorial on the topic and the challenges related to developing applications this way. She built a visualization based on the popular D3.js to illustrate how a fault in a graph of microservices would cascade. The point was that it's a more debuggable option than standard dashboards that show only numbers without telling much about the exact faults. Next.js: The React Framework - Tim Neutkens# Tim Neutkens Tim Neutkens' presentation gave an overview of the popular Next.js framework for React. As a recent user of Next.js, it helped me to understand its capabilities and future better. The framework has been progressing furiously over the past few years, so it was great to see the vision for it.
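As a quick reminder of what makes Next.js approachable, a page is just a React component exported from a file under pages/, and routing follows the file system. The component below is a hypothetical minimal example:

```js
// pages/index.js - served at the root path thanks to
// Next.js file-system based routing.
import React from "react";

export default function Home() {
  return <h1>Hello from Next.js</h1>;
}
```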
Server-Side Rendering using Nuxt.js - Vanessa Böhner# Vanessa Böhner Vanessa Böhner's talk complemented Tim's talk by discussing Nuxt.js, a Next.js-inspired framework for Vue.js. She explained how she approached developing a site for her podcast under a strict deadline. It was cool to see the contrast with the React approach. Speeding Up React SSR with ESX - David Mark Clements# David Mark Clements According to David Mark Clements, if React was invented today, we wouldn't be using JSX. Instead, we would rely on ES6/ES2015 template strings. And that's what he achieved by developing esx. esx lets you write JSX within template strings. The great benefit of the approach is that it can speed up server-side rendering by avoiding work through caching. The more complicated your application is, the greater the potential gains. The best thing is that to get the benefits, you don't even have to use esx as it's possible to precompile JSX to esx without having to change the way you write code. It's good to note the approach is still experimental, though, and you should test the behavior and output carefully before relying on it in a production environment. My hope is that the ideas explored by David find their way to React eventually, as then a large community could enjoy the performance benefits. JavaScript: The fairly odd parts (explained) - Benedikt Meurer# Benedikt Meurer Benedikt Meurer, one of the organizers of the conference, gave a lightning talk on the fairly odd parts of JavaScript. I saw the original version of the talk at YGLF Kiev 2018, and now Benedikt dove into the details. Overall it was an entertaining bit and a great addition to the conference by one of the authors of the legendary V8 engine. Benchmarking inside and out - Tomas Della Vedova# Tomas Della Vedova Tomas Della Vedova, one of the authors of Fastify, discussed his approach to benchmarking. The main points for me were that before measuring, you should make sure the JIT is warm by performing a few runs before starting the main ones. You also have to be careful about what you measure. You may end up measuring something entirely different than what you want if you don't think about the test setup. Automating Your Vulnerabilities Away - Tierney Cyren# Tierney Cyren Tierney Cyren from Node.js discussed the security of npm and applications built on top of it. You should maintain a cached version of the registry to protect against catastrophic failures on the registry side. It's also important to keep track of the current versions of the packages and leverage tools such as npm audit. You should also have a process for dealing with security issues, so if something happens, you are at least ready instead of having to improvise. Given Node itself can have security issues, you should have means to get notified so you can update as needed. Reason: A ML language for the Masses - Patrick Stapfer# Patrick Stapfer Patrick Stapfer, the founder of the Reason Association, discussed the Reason language and why it's finally ready for the masses. He gave an overview of the capabilities of the language, and after that he gave a live demonstration showing the value of the type system. I think Reason is starting to get to a good place, and you can already see product houses especially adopting it for their own development. Stream into the future - Matteo Collina# Matteo Collina Matteo Collina, one of the authors of Fastify and the maintainer of Node.js streams, discussed the complexity of streaming.
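Matteo's points below about generators are easier to appreciate with a small example. Node.js readable streams are already async iterables, so generator-style consumption is possible today; a minimal sketch, with a made-up file name:

```js
const fs = require("fs");

async function main() {
  // for await...of consumes the stream chunk by chunk and
  // surfaces stream errors as regular exceptions.
  for await (const chunk of fs.createReadStream("./notes.txt")) {
    console.log("Read", chunk.length, "bytes");
  }
}

main().catch(console.error);
```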
Handling streams has always been challenging in Node.js, and it's even more complex than I thought. Fortunately, Matteo is doing his best to improve the primitives while avoiding breaking too much in the process. One of the hard parts will be remedying the differences between the browser and Node.js streaming so APIs like fetch work as we expect. To make things worse, using promises brings its own problems. As it happens, generators might be the right primitive to use with streams instead of what we've seen so far. Security: the serverless future - Olga Skobeleva# Olga Skobeleva Olga Skobeleva discussed how Cloudflare is moving towards edge computing and what it means in practice. Compared to what we had before, pushing computation close to the consumer seems like the next natural step as it means in some cases we can avoid a round-trip to the server. The approach is still new, but there's enough capability to experiment with. To demonstrate, Olga showed how to develop two-factor authentication using a traditional password combined with a message sent to the Telegram messaging platform to confirm login to a service. Hiking Day at Latschau# Hiking at Latschau The highlight of the conference for me was hiking on Saturday at Latschau. It's simply amazing to go up and down the hills. The views are stunning, and there's even snow. Best of all, it's not too hot, although you have to be careful with the sun. I wouldn't mind hiking in the mountainous region of Austria more. The experience alone made the whole trip worth it. There was one more activity day on Sunday, but I didn't have enough time to stay for that as I went to Munich to run the yearly webpack meetup. Conclusion# If you don't like clowns, don't visit Bregenz Although the conference venue could have been better, I enjoyed the conference overall. Likely I would try to develop the format further. A single track is fine, but given you have some of the leading minds of the industry in the same place, there could be room for more open sessions instead of a conference alone. Especially the outdoor activities seem to be the thing for WorkerConf, and I hope they emphasize the aspect further. You can find my conference and hiking photos online.

Minima - To-do lists done right - Interview with Alex Fedoseev

If there's one thing that transformed my life, it's the adoption of the Getting Things Done (GTD) methodology. The key part for me was to capture what's to be done and then commit to doing it. I use OmniFocus and Google Keep for this, and while it requires some discipline, it has definitely been worth it for me. Alex Fedoseev has been developing a service, Minima, that achieves this online while aiming to provide functionality beyond personal usage. In this interview, we'll learn more about the solution and his technical approach to it. Can you tell a bit about yourself?# Alex Fedoseev I got into development pretty late, when I was 30 or 31. My original background has nothing in common with either Computer Science (CS) or User Experience (UX), but the latter hooked me somehow, and it's still my main point of interest. For the past three years, I've been a frontend engineer at ShakaCode, and for the past seven years, I've been working on Minima. My current tools of the trade are ReasonML and Rust. How would you describe Minima to someone who has never heard of it?# Minima is a combination of a personal task manager and a project management solution for teams. But it's not quite there yet. I kicked off the MVP with the Personal Workspace only for now. How does Minima work?# For the users, Minima starts in the Personal Workspace—the place where they can manage their tasks: work, life, everything. In the future, they will be able to join Teams—one or many—to collaborate on projects. The app follows GTD principles, so if you are familiar with them, you should feel at home. Here is a screenshot of the Project screen to give you a better idea: Minima project screen How does Minima differ from other solutions?# The lack of personal space was my main concern for years with many solutions on the market. So one of the main goals is to provide private space for each team member, to make it a genuine first-class part of the project management solution for teams. Why did you develop Minima?# I was passionate about the whole project management thing but had zero experience in development (at that time, I was a project manager in the marketing space). I was heavily inspired by Things—a native personal task manager for macOS. It's the best GTD implementation in software, in my opinion. And I would love to have something like this built into a team project management tool. Once I had shaped up my vision of the app, I could have stayed on the product side of things and searched for investors. But I always had the stance that you can't efficiently manage an area that you haven't mastered yourself. So I decided that it was time to learn something new. At first, I was worried that someone was already addressing this problem. Mainly, because people use the "This problem is already solved by X" argument quite often these days. But if you think about it, it's silly to stop developing something you care about just because someone else is trying to solve the same problem. Take the car industry: there are dozens of vendors out there, and they co-exist just fine because customers choose different products for different reasons: price, features, design, etc. I don't see how it's different for software solutions. Anyway, my only goal is to build a great product, not to win the race. What kind of technical challenges have you encountered during the development of Minima so far?# The hardest part of developing a side project is that you can't do this full time.
I spent a few years in the loop: after a pause, I come back to the app, don't remember where I am, spend a few hours per day trying to continue with a complex feature, things get randomly broken here and there because I'm hardly in context, depression, pause, repeat. Types Get You Into the Zone# I can't stress enough how helpful types are here. Even after weeks, it takes minutes to get into the zone because everything you do starts with the types, and then the compiler reminds you what it's all about, immediately slaps you if you do something stupid, and keeps you in context 100% of the time. The Platform™# The second pain point was The Platform™. One of my goals with Minima is to make the user experience as close as possible to desktop apps. Neither the platform nor npm helped much here. The packages I tried were either over-abstracted, hard to bind to, or didn't do what I need in the way I need. At some point, I stopped spending time on bindings and invested it in my own solutions. At the time of writing, quill is the only UI-related dependency used in production. Everything else—primitives, dialogs, focus/selection management—all internal implementations. It took time, but I'm mostly satisfied with the result, and owning your UI is a pleasure for many reasons. On Open Sourcing and Focus# Without a doubt, the most challenging UI part was drag-and-drop (and the most incomplete so far). One of my mistakes was that I open sourced it too early. Never do this. Implement internally → battle test in application context → abstract. Not the other way around. As a result, I spent a lot of time implementing functionality that was not critical at all and burned out. After some break, I pulled it back into the app, addressed the app requirements, and moved on. It's still far from perfect, but the issues are localized and will be addressed along the way. I will open source it back, hopefully, soon. Offline Support# The offline story is also tricky but brings a lot of benefits to the user. I use a combination of PouchDB + CouchDB for storing app data. The former is running in a worker thread, and almost all operations are optimistic and happen in the background, so the main thread should be fast, responsive, and spinner-free. Service workers are giving me a lot of pain, though, especially in Safari. I have an excellent infra manager implemented in Rust, but I guess this answer is already long enough. What next?# The next big things in the pool are timed to-dos and a mobile app. Lots of small/medium things. Once I feel comfortable with the app, I'll finalize the pricing model. In the background, I'm prototyping Team Workspaces; its implementation is going to be quite different from the Personal Workspace implementation on the tech side of things, so I need to think out the details here. There is a Roadmap section on the site with a more detailed list. What does the future look like for Minima and web development in general? Can you see any particular trends?# Oh man, it's all in flux. I was living on the bleeding edge for years, and I'm afraid of guessing what's next. The web platform has been used as an app platform for some time, but it's not a welcoming space in its current state, IMO. I'm glad that types are coming to the web though. Objectively, I wouldn't have gotten to this MVP unless I had switched to Reason and Rust. These technologies were like nitro to my progress. I sincerely hope they will gain more adoption in the near future. What advice would you give to programmers getting into web development?# Get a good starting point.
If you target application development, something like OCaml/Reason, Elm, Rust, F#, Swift, etc. would work. I can't imagine app development without sum types and pattern matching. Find a mentor, if possible. Self-teaching is slow and sometimes depressing. Learn to take breaks. If you're tired and can't grok something, it's ok to let it go for some time. The above is the list of my mistakes haha. Who should I interview next?# Lots of great folks in the Reason community. Mentioning just a few of them (in alphabetical order): BlaineBublitz — Gulp lead and contributor to the Reason ecosystem bloodyowl — without his work, the transition to the new reason-react would be significantly harder bryphe — author of Oni cknit — great contributor to the Reason ecosystem and the author of the next version of bs-react-intl DmytroGladkyi — running reasonml.online leostera — I heard he likes streaming & actors :) MoOx — maintainer of reason-react-native Sander_Spies — WASM hope of the OCaml community thangngoc89 — author of sketch.sh UlrikStrid — doing magic things with Reason native infra wokalski & rauanmayemir — folks behind Brisk yawaramin — doing exceptional work on educating the Reason community There are a lot more on the list, but I needed to stop at some point, right? Any last remarks?# If you find Minima interesting, there are a few places where I'll be happy to answer questions or help with problems: Slack Twitter GitHub Facebook Conclusion# Thanks for the interview, Alex! It's great to see when someone decides to push for their idea even if it means working a lot and expanding their set of skills. For me, Minima is a project to keep an eye on. If you want to get into the GTD flow on the web, it's definitely worth a look! To get started, check out Minima online; you can also find parts of the project on GitHub.

Reakit - Build accessible rich web apps with React - Interview with Diego Haz

When building web applications and sites using React, you have to think carefully about the user interface. You might either go with an established user interface library, develop your own, or try a bit of both. One of the aspects that is often forgotten in UI design is accessibility - how can you make sure as many people as possible are able to use your creation? That is where using a user interface library that's accessible out of the box can come in handy. To learn more about such an option, I am interviewing Diego Haz, the creator of Reakit. Can you tell a bit about yourself?# Diego Haz My name is Diego Haz. I'm 29, and I've been programming for about 17 years. I started building Open Source Software (OSS) four years ago. I often say I don't like to code. I want to build things for humans and to impact their lives positively. Code is just the way I found to do that. I could be a dancer, but I'm terrible at dancing, so it's code. 😄 OSS is fantastic for achieving this. I can build one solution so many humans (developers) can use it to create many other solutions for even more humans. Besides that, I'm married to Grace Kelly with a five-year-old stepson. I'm autistic (Asperger), I love astronomy, and I hope someday I'll help solve hunger in the world by automating all the processes from the production of the food to its distribution. Automation is the key. How would you describe Reakit to someone who has never heard of it?# Reakit is a low-level component library for building accessible high-level UI libraries, design systems, and applications with React. It provides components like Dialog, Menu, Tab, Tooltip, and Form, among others, that follow all the WAI-ARIA recommendations. You could design a dialog using Reakit as below:

```jsx
import {
  useDialogState,
  Dialog,
  DialogDisclosure,
} from "reakit/Dialog";

function MyDialog() {
  // dialog exposes `visible` state and
  // methods like `show`, `hide` and `toggle`
  const dialog = useDialogState();
  return (
    <>
      <DialogDisclosure {...dialog}>
        Open dialog
      </DialogDisclosure>
      <Dialog {...dialog} aria-label="Welcome">
        Welcome to Reakit
      </Dialog>
    </>
  );
}
```

If accessibility matters to you (and there's only one correct answer to this), you should use Reakit components as your foundation. You can play with the example on CodeSandbox. How does Reakit work?# You can install Reakit through npm:

```bash
npm install reakit
```

And then, use it like this:

```jsx
import React from "react";
import ReactDOM from "react-dom";
import { Button } from "reakit/Button";

function App() {
  return <Button>Button</Button>;
}

ReactDOM.render(<App />, document.getElementById("root"));
```

Components can be imported directly from reakit or using separate paths like reakit/Button. The latter is preferred if your bundler doesn't support tree shaking.

```jsx
import { Button } from "reakit";
import { Button as Button2 } from "reakit/Button";

if (Button === Button2) {
  console.log("They point to the same file");
}
```

If you use Babel, you can rewrite the imports using babel-plugin-transform-imports. This way you can use import { Button } from "reakit"; while gaining tree shaking. The idea works with other packages too. Components# The highest level API (which is still low level for most use cases) of Reakit exports React components. They receive two kinds of props: options and HTML props. Options are just custom props that don't get rendered into the DOM.
They affect internal component behavior and translate to actual HTML attributes:

```jsx
import { Hidden } from "reakit/Hidden";

// `visible` is an option
// `className` is an HTML prop
<Hidden visible className="class" />;
```

Besides that, all components can be augmented with the as prop and render props.

```jsx
<Hidden as="button" />
<Hidden>{hiddenProps => <button {...hiddenProps} />}</Hidden>
```

State hooks# Reakit provides state hooks out of the box, and you can also plug in your own. They receive some options as the initial state and return options needed by their respective components:

```jsx
import { useHiddenState, Hidden } from "reakit/Hidden";

function Example() {
  // exposes `visible` state and
  // methods like `show`, `hide` and `toggle`
  const hidden = useHiddenState({ visible: true });
  return (
    <>
      <button onClick={hidden.toggle}>Disclosure</button>
      <Hidden {...hidden}>Hidden</Hidden>
    </>
  );
}
```

Props hooks# As the lowest level API, Reakit exposes props hooks. These hooks hold most of the logic behind components and are used heavily within Reakit's source code as a means to compose behaviors without the hassle of polluting the tree with multiple components. For example, Dialog uses Hidden, which in turn uses Box:

```jsx
import {
  useHiddenState,
  useHidden,
  useHiddenDisclosure,
} from "reakit/Hidden";

function Example() {
  const state = useHiddenState({ visible: true });
  const props = useHidden(state);
  const disclosureProps = useHiddenDisclosure(state);
  return (
    <>
      <button {...disclosureProps}>Disclosure</button>
      <div {...props}>Hidden</div>
    </>
  );
}
```

Styling# Reakit doesn't depend on any CSS library, and components are unstyled by default. You're free to use whatever approach you want. Each component returns a single HTML element that accepts all HTML props, including className and style. Learn more about styling. How does Reakit differ from other solutions?# The main difference is that it's entirely focused on accessibility. It's also low level enough so other solutions (like Material UI, Ant Design, Semantic UI React, etc.) could use it underneath. A similar library that also focuses on accessibility is Reach UI by Ryan Florence. It is a fantastic library, but the design choices make it harder to compose and customize. A good example of this is the use of implicit React Context. I prefer to give specific pieces so users can build new things without being tied to my design choices. They have more control over what they're making. You can always go from explicit to implicit (for example, you can build a React Context component API using the Reakit API; see the sketch after this answer). But the other way around is hard. Here's an example of a high level Tooltip API built upon the low level Reakit API:

```jsx
import {
  Tooltip as BaseTooltip,
  TooltipReference,
  useTooltipState,
} from "reakit";

function App() {
  return (
    <Tooltip title="Tooltip">
      <button>Reference</button>
    </Tooltip>
  );
}

function Tooltip({ children, title, ...props }) {
  const tooltip = useTooltipState();
  return (
    <>
      <TooltipReference {...tooltip}>
        {(referenceProps) =>
          React.cloneElement(
            React.Children.only(children),
            referenceProps
          )
        }
      </TooltipReference>
      <BaseTooltip {...tooltip} {...props}>
        {title}
      </BaseTooltip>
    </>
  );
}
```

If you want something composable and low level, you should choose Reakit. If you're looking for something already abstracted, with less boilerplate, easier to use, and with restrictions that make it harder to make mistakes, I recommend Reach UI.
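As a concrete take on the explicit-to-implicit point above, here's a hypothetical sketch (not from the Reakit docs) that wraps Reakit's explicit dialog state in React Context to get a Reach-style implicit API:

```jsx
import React, { createContext, useContext } from "react";
import {
  useDialogState,
  Dialog,
  DialogDisclosure,
} from "reakit/Dialog";

const DialogContext = createContext(null);

// The provider owns the explicit state...
function DialogProvider({ children }) {
  const dialog = useDialogState();
  return (
    <DialogContext.Provider value={dialog}>
      {children}
    </DialogContext.Provider>
  );
}

// ...so consumers no longer pass it around by hand.
function Disclosure(props) {
  const dialog = useContext(DialogContext);
  return <DialogDisclosure {...dialog} {...props} />;
}

function ContextDialog(props) {
  const dialog = useContext(DialogContext);
  return <Dialog {...dialog} {...props} />;
}
```

Usage would then look like <DialogProvider><Disclosure>Open</Disclosure><ContextDialog aria-label="Welcome">Welcome</ContextDialog></DialogProvider>, which mirrors the implicit style while staying built on the explicit primitives.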
Why did you develop Reakit?# I started building Reakit in 2017 to ease my team's job as we were creating most of our components from scratch, and they weren't accessible at all. As an autistic person, I don't have any disability that makes the web inaccessible to me. But I do have disabilities that cause a part of the world to be unavailable to me. I know how it feels not to be able to do what most people do. And this motivates me even more to work on Reakit. What next?# I'm currently talking with a few companies so I can work with them and possibly use Reakit on real-world projects. Doing this will help me find real use cases and improve the library. Once v1.0 gets out of beta, I'll start building some paid products and services around it. The goal is to make Reakit self-sustainable, with at least one developer dedicated to it full-time. What does the future look like for Reakit and web development in general? Can you see any particular trends?# In 20 to 30 years, I believe that websites — and software in general — will not be made by humans anymore. Companies will upload their business requirements and their audience data into an AI, which, after testing infinite versions of the software with unlimited versions of simulated people, will respond with the best ready-to-use application based on the available data. Code and design will be fully automated. After all, there's no subjectivity in this: the version which performs better is usually the best version. It's hard to see now, but Reakit and all the products I'm planning to build around it are my first step in this direction. What advice would you give to programmers getting into web development?# Learn to learn. Web development and front-end development specifically are evolving fast, and knowing how to learn things is the best ability one can have. Get used to watching videos at 2x speed (or quicker), learn how to search effectively, etc. Who should I interview next?# Pedro Nauck - Creator of Docz Bruno Lemos - Creator of DevHub Any last remarks?# Don't forget to support us on Open Collective. ❤️ Conclusion# Thanks for the interview, Diego! I think Reakit hits a good balance between providing functionality and letting developers customize it for their own use cases. The greatest benefit of the approach is that it allows people to bootstrap their own UI libraries without having to develop everything from scratch while gaining functionality and avoiding some of the maintenance cost. To learn more about the project, take a look at the Reakit website and star Reakit on GitHub.

Benefit - utility CSS library - Interview with Chad Donohue

One of the developments that has begun to change the way we style our applications and sites is the introduction of utility classes. Tailwind is an example of a popular framework that has adopted the approach. To learn more about the approach, I am interviewing Chad Donohue about his library called Benefit. It's designed particularly with React developers in mind and makes Tailwind a perfect fit with React. Can you tell a bit about yourself?# Chad Donohue My name is Chad Donohue. I enjoy creating user experiences and talking about design systems. I've written computer software as a Full Stack Engineer for a little over ten years. When I'm not in front of a computer screen, I spend time with my beautiful wife and three amazing kids. How would you describe benefit to someone who has never heard of it?# benefit is a Tailwind-compatible utility CSS library powered by emotion. It is framework agnostic, has a file size of 5kB, and it only ships the CSS styles that you use. How does benefit work?# Here we have a Button component:

```jsx
import React from "react";

export default function Button() {
  return <button>Click Me</button>;
}
```

We'll add a few lines to include benefit and add some Tailwind class names:

```jsx
/** @jsx jsx */
import { jsx } from "benefit/react";

export default function Button() {
  return (
    <button
      className="
        px-8 py-2
        bg-blue-600 text-white font-bold tracking-wide uppercase
        rounded-full border-2 border-blue-700 shadow-lg"
    >
      Click Me
    </button>
  );
}
```

By adding two lines and some additional class names, we have accomplished two things: We now have the power to style with all available utility classes (~10,000) at just a 5kB inclusion cost. Only the styles associated with those class names were added, which happen to be only ~350 bytes. Check out this CodeSandbox to explore this example more. At the point of inclusion within your project, benefit takes its default configuration (or your own if you need to customize it), then it generates CSS declarations in memory. As you use these generated class names in your markup, the styles are looked up in benefit's cache, autoprefixed, and injected into the document. How does benefit differ from other solutions?# On the client, benefit generates and injects styles for class names only when they are used. On the server, benefit pairs with emotion's built-in SSR support and inlines CSS with the markup. Since benefit is powered by emotion in both scenarios, you can also tap into the power that it provides, like nested declarations and deterministic style composition. Also, being framework agnostic, benefit can be used alongside any JS framework. It can be introduced at the component level or at the root of an application. And, dead-style elimination is built-in. Why did you develop benefit?# I help build and ship 3rd party components. It is for sure an edge case, but it brought on problems to solve for: [x] Style isolation - We needed the ability to normalize values (margin, padding, etc.) and sandbox the elements that made up our shipped components and not have to duplicate those normalized styles with every new component. [x] Dynamic branding - When our components are requested, we need to support different branding colors and typefaces, which means we are responsible for generating design system rules based on several incoming variables. [x] Rapid prototyping - Once these design system rules are decided, we need them to be reused throughout our component library.
[x] No extra build step to generate styles - Since every request is different, we need to ship a runtime that is small and can handle dynamic style injection. Also, we need to inject only what we use. benefit started as an internal idea to solve these issues and has been through a few iterations. As it matured a bit more, we began to see how this could be a solution for both isolated components and full-blown sites alike. What next?# We are working to remove the runtime altogether for SSR. Soon, we'll have some examples put together for how this would work with something like Next.js. We're also working on a way to generate custom documentation based on a configuration, so it will be easy to share visually how different benefit configurations look and behave. What does the future look like for benefit and web development in general? Can you see any particular trends?# As digital experiences increase in complexity, we have more of a responsibility as makers to take a look at what we are shipping to the end user. In the future, I see this getting better through the use of code-splitting and rendering on the server before shipping to the browser. The use of utility classes for styling will continue to gain popularity thanks to the great work over at Tailwind. Utility classes are a great pattern that DRYs up a lot of the view layer. I'm not saying that every page/application will only have utility classes, but the individual one-off styling needs will go down considerably. CSS Utility Classes and "Separation of Concerns" by Adam Wathan is an excellent read that talks about some of the benefits to be gained from styling with utility classes. What advice would you give to programmers getting into web development?# Make it a goal to learn something new every day and share your knowledge with others. This industry moves fast, and it helps tremendously to be able to step out of your comfort zone often. It is a gratifying profession that allows you to produce your best work while simultaneously learning something. Who should I interview next?# Andy Bell creates excellent experiences with simple, solid foundations. I'm always impressed with his work and his fondness for progressive enhancement. Sarah Drasner can always find an untapped topic and find a way to share it with everyone while also making it easier to understand. Her contributions to open source and teaching make this community so special. Eric Clemmons has been a mentor to me for some time now. I think people could learn tremendously from the viewpoints that he provides around developer experiences and building remote teams. Any last remarks?# Thank you for your time and interest! I've enjoyed sharing my thoughts here and am always around on Twitter and GitHub. Ask me anything 😄! Conclusion# Thanks for the interview, Chad! I can immediately see how adopting the utility class approach would help me in my daily development, and I might have a project in mind that's a perfect fit for it! To learn more about benefit, check out the homepage for more examples and star the project on GitHub.

Webpack UI - Configure webpack with a UI - Interview with Even Stensberg

Webpack is infamous for being a bundler that's not entirely easy to configure. That's one of the main reasons why I wrote the webpack book available on this site. To be fair, webpack doesn't require as much configuration as it did before. To overcome this issue, Even Stensberg and a group of developers are currently creating a user interface for webpack. In this post, you'll get an idea of what the tool will be like, and the early preview version will be available in August 2019. Can you tell a bit about yourself?# Even Stensberg Growing up playing World of Warcraft, I initially wanted to make games that I created myself. I was fortunate as a kid, and I'm thankful for growing up in Norway for that. I got a Packard Bell computer early on, around the age of six, but the internet we had was too slow to do anything meaningful except gaming on high latency. At that age, I spent a lot of time learning English and read fluently when starting school because I had been playing WoW for about three years (sorry mom). The experience came in handy when I was tipped to check out C++ by my uncle in 5th grade. I went to the local library and got hooked immediately. Serious Programming Starting from High School# Professionally, I didn't start doing serious programming until high school when I wanted to create a mobile app. By then I had taken a lot of courses in Computer Science, but I didn't know a language fully. I ended up developing in JavaScript using Meteor to cross-compile the application to different platforms and figured out that React was a better starting point. 0CJS# Before starting to develop anything, I had to set up configurations, and it took me one month to set up my initial webpack build that compiled a simple Hello World website. After becoming fluent in front-end, I figured out that no one should spend that much time on tooling, so I began making an automation library to automate this, now commonly known as 0CJS. I was lucky: Sean Larkin from Microsoft reached out to me about improving webpack, so I got the opportunity to implement this in webpack itself, which was great. It turns out that Google also was interested in this, so I got a gig to continue to improve webpack-cli for Progressive Web Applications, performance, and to make it easier for new users to use the tool. webpack-cli# Since then, I've been actively maintaining webpack-cli, and we're releasing the final piece of the puzzle that we have been working on for a few years. I'm excited about this. The team has put a lot of effort into making this happen over a long period of time, and we have many contributors that I can't all thank at once, but I want to thank everyone involved in making this happen. How would you describe Webpack UI to someone who has never heard of it?# New project Webpack UI is like creating a new game in Pokemon. You create a game, choose your desired player and Pokemon, then you are all set to explore. With Webpack UI, you can import a project (game) or create a new one, choose your preferred libraries (player) and additional utilities; then you are all set to begin developing. The tool is intended to make webpack easier to use by having a Graphical User Interface for webpack, thereby making it easier to understand how to configure your project and get started more quickly. How will Webpack UI work?# Dashboard It's nice that you asked, because I've been thinking about making it as easy for users as possible for some time. Most developers know how the terminal works.
You will need to run a command like webpack ui, and then a local server will open on your machine and start the application. From there, you will enter the dashboard, where you will have the opportunity to create a new front end application or to import an existing project. After doing so, you can compile a web application, add more things to your application such as offline support or other file formats, or analyze your project for performance issues. The team has a lot in mind, but we need to implement features gradually because we are an Open Source Organization, and we develop this tool in our spare time. How will Webpack UI differ from other solutions?# Starter packs There aren't many solutions with as many features as Webpack UI because this trend of Graphical User Interfaces for libraries has just started, even though there are some. Webpack UI is much like Vue UI, but it is different in many ways. This tool is intended to ease an already cumbersome process of working with configuration files, and Vue doesn't have that. We have a Vue member helping out in webpack-cli with this project, and I'm glad that there are so many people that want to dedicate their time to develop this tool. For instance, with Webpack UI, you will have a full overview of a project, and you will be able to implement new features rapidly. As a developer, you will have the chance to view compilation stats, add libraries, measure performance, and learn about webpack without having to use a lot of time in the terminal. Most developers would argue that GUIs are bad because they slow down productivity, but we have figured that with webpack this isn't the case. Webpack is a tool that is hard to get a complete understanding of, and in particular new users get affected by this. We want to make the web better by providing a good baseline for developers to start building from. Why are you developing Webpack UI?# Scaffolds The end game of Webpack UI, and what the team has been working on for so long, is that developers have a lot of power adjusting advanced webpack configurations and front end projects but still have a solid foundation to start from. Performance is a vast field in web development, mostly because it has only recently seen investment. It is hard for new users to start with webpack, and that is what we are tackling here. After releasing Webpack UI, new users will be able to start developing front-end applications with no prior experience, which is suitable for educating new developers in the industry. We will see that reasonable defaults are set out of the box (by webpack), which allows applications to have the right performance metrics and clean project infrastructure. To condense: the hurdle of using webpack is going to be removed. What next?# Webpack UI is the last thing we are adding interface-wise to webpack for a while. After this project is done, we are focused on proper documentation with examples and tutorials and on helping developers with any problems they might have using the tool. When a project is in maintenance mode, it is often a good sign, because we are not adding any new features and we are focused on having a project that works well without having to explain how it works. If we can make users use webpack without having to tell them what it is, we have done a great job. That is a double-edged sword because we want people to know how webpack works so that they can understand how web development works. First of all, we want to make webpack easier to use without having to study it for weeks.
What does the future look like for Webpack UI and web development in general? Can you see any particular trends?# The future of this looks bright. We've seen people quite happy with how Vue UI turned out, and we expect the same for Webpack UI. In general, I'd like to emphasize that we will see more tools adjusting to the 0CJS notion, which means that users will only have to install the tool to benefit from it. GUI tools and 0CJS have been a positive experience for a lot of people, so I expect nothing less in 2019 and beyond. What advice would you give to programmers getting into web development?# My advice is to use sandboxes and boilerplates a lot at the start. When you are confident in the language, you will be better prepared for a whole front-end environment with Docker, webpack, SASS, and everything that comes in between when actually creating a web application. We are doing a lot of things to make this easier in 2019 with webpack, but an understanding of the platform is the best way to get into web development. The above means creating web applications in the old JavaScript format with a single HTML file, adding styles in CSS, and having fun. After being comfortable with this, one could begin to look at boilerplates and guides on how to configure a modern web application. Who should I interview next?# I advise you to interview someone from Vue. Any last remarks?# Sure! The early preview version will be available in August 2019, and the aim is to launch a new look for webpack-cli as well. We have high hopes that what we are soon about to release will positively help developers with their work and projects. We are actively looking for contributors, so feel free to send me a message if you are interested in helping out with any of this! Conclusion# Thanks for the interview, Even! It will be exciting to try out Webpack UI once there's a preview version to play with, so I can't wait for the public release. The UI combined with the goodies included in webpack 5 should bring bundling to a new level. You can check out the initial UI and provide feedback on it already.

ReasonML - Type safety with ease - Interview with Gabriel Rubens

I've had the chance to observe the evolution of the ReasonML ecosystem up close. For me, it seems to solve many pain points of the current JavaScript/TypeScript based web ecosystem by addressing many of the issues in the language itself. To understand more, I am interviewing Gabriel Rubens. His company has been using ReasonML in production for a while now, so I am curious to learn how it has worked out for them. To learn more about ReasonML, I recommend checking out ReasonConf. It's the first conference held on the topic, and it will be back in 2020! You should check out the conference videos online. Can you tell a bit about yourself?# Gabriel Rubens I'm co-founder and Head of Tech at Astrocoders. We're a lean and fast software company focused mostly on everything related to financial, tax, and banking applications. How would you describe ReasonML to someone who has never heard of it?# Try to picture how React would be with ECMAScript 2077. That's ReasonReact. I genuinely believe ReasonML is the future for JavaScript developers wanting a more robust and safe language. For me, four of the essential features of the language that changed the way I code for the better are: an incredible type system, exhaustive pattern matching, Algebraic Data Types (ADTs) and variants, and Generalized Algebraic Data Types (GADTs). ADTs and GADTs# An ADT is a data type in ReasonML which allows you to specify the possible ramifications of a value carrying that type in the program. The approach is complementary to pattern matching. Consider the example below:

```reason
type person =
  | Teacher(string)
  | Student(string);

/*
 ** The most interesting part of this is that Reason can infer correctly
 ** the type of the argument `person` based on the pattern matching
 ** usage of it and that as you declared in `type person` the compiler
 ** will ensure you cover all the possible cases. If you forget one
 ** it'll tell you which one you missed.
 */
let sayHello = person =>
  switch (person) {
  | Student(name) => "Hello, " ++ name
  | Teacher(name) => "Hello, professor " ++ name
  };

/*
 ** This is why "nullables" are not necessary for Reason. The presence
 ** or not of value can be encoded with an ADT/variant, and Reason
 ** already brings a built-in type for this called `option('a)`.
 */
let studentName: option(string) = Some("Nicole");

switch (studentName) {
| Some(name) => "Hello, " ++ name
| None => "No one to greet"
};
```

The above is extremely powerful as you can rely on the compiler to ensure all cases are covered. According to the Reason documentation: Philosophically speaking, a problem is composed of many possible branches/conditions. Mishandling these conditions is the majority of what we call bugs. A type system doesn't magically eliminate bugs; it points out the unhandled conditions and asks you to cover them. The best thing is that this style of programming is 100% idiomatic in ReasonML; you can't translate it very well to JS or TS as they weren't built for this. You can, of course, try to keep the same idea, but it won't be quite the same thing.
JavaScript:

```js
const person = { type: "student", name: "Steven" };

const sayHello = person => {
  switch (person.type) {
    case "student":
      return `Hello, ${person.name}`;
    case "teacher":
      return `Hello, professor ${person.name}`;
    // There's probably a potential bug here without a `default` clause
    // as we can't ensure person.type will always be one of those two
  }
};
```

TypeScript:

```ts
type personType = "student" | "teacher";

interface Person {
  type: personType;
  name: string;
}

const person: Person = { type: "student", name: "Steven" };

const sayHello = (person: Person): string => {
  switch (person.type) {
    case "student":
      return `Hello, ${person.name}`;
    case "teacher":
      return `Hello, professor ${person.name}`;
    // With strict mode on, the TS compiler can catch typos and
    // a possible undefined return if you forget a case;
    // though it's not 100% idiomatic, it is achievable
  }
};
```

GADTs are a more advanced usage of ReasonML to encode more complex logic into the type system; you won't need the feature for the typical cases, and most of the time you'll be just fine with normal ADTs/variants. The primer by _shrynx explains GADTs in great detail. How does ReasonML work?# Reason can be thought of as a syntax for the OCaml language, which is mature by itself. OCaml is excellent because it's straightforward to hack the compiler to target the compilation to other things that are not binaries/bytecode; that's where BuckleScript (BS) enters the scene. Bloomberg started BS as an alternative to js_of_ocaml with usage more familiar to JS programmers. To learn more, read the introduction to the ReasonML toolchain by @thangngoc89. How does ReasonML differ from other solutions?# Compared to Elm and PureScript, ReasonML tries to be familiar enough to JavaScript developers without compromising OCaml language features too much. BuckleScript also makes reusing existing JavaScript code easy, with great tooling for FFI/binding to JavaScript modules. Why did you choose ReasonML?# We have been using ReasonML in my company for almost two years, and it was the best tech decision ever; we are shipping code with more confidence and without fear of refactoring for a lot of projects. One of them is an automated tax calculation platform for Brazilians and expatriates working in Brazil (http://lion.tax). It was because of this tax solution that we looked for a tool that would ensure we wouldn't make dumb mistakes with other people's money. We try to open source our ReasonML tooling as much as possible. The most important ones that we are using internally are ReForm for form state management with type-safe multi-type support and BsReactNativePaper (which we handed over to Callstack, the creators of Paper). What does the future look like for ReasonML and web development in general? Can you see any particular trends?# I think with the rise of TypeScript/Flow and other type system tooling for the web, the general developer public is more open to these ideas. Some projects are being built right now with ReasonML that will change what we expect about creating multi-platform apps. For instance, I believe Revery is the evolution of Electron. Esy will also change what you expect from a package management tool regarding the reliability of builds. What advice would you give to programmers getting into web development?# Don't try to learn everything at once, don't open all the links, and most importantly, don't give up.
It can feel quite frightening when you see the number of things you have to learn, but my tip is to try to build something and learn what is necessary for it, so that when you come back to read the theory, you'll have better real-world context. Also, try to contribute as much as you can to open source.

Who should I interview next?#

I have a bunch of names in my head of great developers I know:

- Diego Haz - The creator of Reakit
- Sibelius Seraphini - Relay advocate and contributor to the React Native community
- Bruno Lemos - Creator of devhub
- Heliton Nordt - Advocate
- Khoa Nguyen - The creator of sketch.sh
- Alex Fedoseev - The creator of re-formality and other ReasonML libraries

Any last remarks?#

Just keep seeking knowledge.

If you are interested in all of this, I highly recommend reading the second edition of Real World OCaml, watching this talk from Cheng Lou, and the ReasonConf 2019 playlist. Also, we are always looking for talented people at Astrocoders; if you would like to get to know us better, send us an email.

Conclusion#

I think ReasonML shows what JavaScript might become one day. That said, there's no reason not to try it right now to see what you might be missing. The popularity of type systems seems to be rising, especially thanks to the popularity of TypeScript. ReasonML is an example of what you can achieve when you go far enough.

The ecosystem is still emerging, but it's not a bad time to look into the language! Check the ReasonML site to get started!

Express Gateway - A microservices API Gateway built on top of Express.js - Interview with Vincenzo Chianese

If you are dealing with microservices, you run into questions like how to manage and orchestrate them. Vincenzo Chianese has come up with a solution designed for the popular Node.js web framework Express.js. Learn more about Express Gateway in this interview.

Can you tell a bit about yourself?#

I work at Stoplight.io, where I help to develop tools for API developers. Formerly, I worked for other API-tooling companies. I also organize the API Meetup in Barcelona, which we're taking to the next level this year by turning it into a conference: API Days Barcelona. I'm a Google Developer Expert in Web Technologies as well as an Auth0 Ambassador.

How would you describe Express Gateway to someone who has never heard of it?#

Express Gateway is an open source API gateway written in JavaScript and running on Node.js. It leverages the vast existing middleware ecosystem for Express.js.

An API gateway is a centralized middleware that encapsulates the internal system architecture and provides an API that can be shaped based on real client needs rather than merely returning what the particular microservice is sending you back. These gateways effectively implement the facade pattern in the microservices world. An API gateway can have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling.

How does Express Gateway work?#

Express Gateway is a bunch of components built declaratively around Express to meet the API gateway use case. It harnesses the vibrant ecosystem around Express middleware and transforms Express into a dynamic runtime engine.

For example, you can easily hardcode routes statically through Express' API. In Express Gateway, however, those values can continually change and are dynamically injected as parameters without having to alter the underlying code. Essentially, all of the core components within Express Gateway make Express more dynamic and declarative.

For instance, a gateway that proxies all requests to httpbin with a key-auth check is just 20 lines away.
```yaml
http:
  port: ${EG_HTTP_PORT:-8080}
apiEndpoints:
  api:
    host: "*"
serviceEndpoints:
  backend:
    url: https://httpbin.org
policies:
  - proxy
  - key-auth
pipelines:
  adminAPI:
    apiEndpoints:
      - api
    policies:
      - key-auth:
      - proxy:
          action:
            serviceEndpoint: backend
```

In contrast — an imperative approach for this would require the following lines of code:

```javascript
const express = require("express");
const app = express();
const hostapp = express();
const passport = require("passport");
const HeaderAPIKeyStrategy = require("passport-headerapikey")
  .HeaderAPIKeyStrategy;
const vhost = require("vhost");

passport.use(
  new HeaderAPIKeyStrategy(
    { header: "Authorization", prefix: "Api-Key " },
    false,
    function(apikey, done) {
      console.log(apikey);
      done(null, {});
    }
  )
);

hostapp.get(
  "/api/authenticate",
  passport.authenticate("headerapikey", {
    session: false,
    failureRedirect: "/api/unauthorized",
  }),
  function(req, res) {
    res.json({ message: "Authenticated" });
  }
);

app.use(vhost("domain.com", hostapp));

const listener = app.listen(process.env.PORT, function() {
  console.log("Your app is listening on port " + listener.address().port);
});
```

Even though the lines of code are almost the same, Express Gateway offers:

- A wide array of ready-to-use policies
- Integrated error handling (which has not been taken into consideration in the code example)
- A minimal identity server to store your users, applications, and keys
- Hot reloading capabilities
- Container-native software

How does Express Gateway differ from other solutions?#

When it comes to API gateways, rest assured that there are tons of alternatives; but when I was building Express Gateway, I had the following principles in mind, which are probably the key differentiating factors:

- Declarative file-based configuration: although the gateway can be configured using a bunch of API calls, Express Gateway is usually defined with a YAML-based file that can be versioned and is easy to inspect and understand. Moreover, the gateway's hot reload mechanism will automatically reload the configuration when the target files are changed/saved.
- Easy: the gateway is not inventing any new rocket science concepts. If you ever wrote an Express.js application, you can use, extend, and read Express Gateway's source code literally within hours. The whole ecosystem of Express middleware can be installed in minutes. Spinning up an instance is a single command away.

Why did you develop Express Gateway?#

The spark was the frustration related to setting up all the other solutions I had found so far in the market. Some of these require database instances; some require complicated configuration steps to get the minimal use case working. On the other hand, I wanted something that could accomplish 80% of the use cases with 20% of the effort. And so the main principles were:

- I want to be able to spin up a gateway instance with no external dependencies with a single command
- I do not wish to have a database to store my configuration
- I want to be able to iterate on and develop my gateway configuration without having to make 3000 protected API calls

To keep it short, Express Gateway is an API gateway written by a developer, for developers.

What's next?#

A lot is going on — even though right now I'm the only active maintainer of the project. We're looking to finalize the Ingress Controller integration for Kubernetes as well as support for OpenAPI 3.0 (importing documents). I also gave a presentation on what this integration could look like.
What does the future look like for Express Gateway and web development in general? Can you see any particular trends?#

I hope to get more contributors. Although we have several Fortune 500 companies using Express Gateway in production, nobody has ever contributed back or even allowed me to publicly announce their Express Gateway usage, which is a little bit sad. Therefore, beyond the development efforts, I'd like to focus on lowering the contribution barrier as well as mentoring people willing to work on the gateway.

What advice would you give to programmers getting into web development?#

Refrain from the hype. Web development is changing, and it is very confusing even to get started. Shall I use webpack? Parcel? GraphQL? A REST API? My advice: you're writing an application for a purpose; focus on that.

Who should I interview next?#

I think you should get in touch with Stephen Mizell. He's been working on very cool stuff for a very long time, and it's worth voicing some of the good things he's been doing.

Any last remarks?#

This has been great. Thanks a lot for the invitation, and please keep going with this. I've discovered at least two good open source projects to look at because of this interview series!

Conclusion#

Thanks for the interview, Vincenzo! It was great for me to learn about API gateways, as I hadn't seen the need or the value before. Now it's clear to me how using one such as yours can cut some code while keeping functionality understandable.

Learn more at the Express Gateway site and consider starring the project on GitHub.

Experiences on typeof 2019

I was invited to typeof 2019 (27-29.03) as a speaker and a workshop instructor. It was a small-scale conference (about 200 people) held in Porto, Portugal. It was my second time in Portugal (first time on the mainland) and I enjoyed the trip a lot.

Overview of typeof 2019#

typeof 2019 was held in the old customs building

It was the first time typeof was organized and I would say it went well for a first-time effort. Big thanks to Mindera for running things smoothly! I know running a conference can be rather stressful and I feel they did a decent job at it. No doubt the second edition will be even better.

A Workshop Day and Two Conference Days#

The conference started with a workshop day that had a webpack and a React workshop. I ran the webpack one based on my webpack book material. As always, you notice places where the content could be improved, and despite some struggles, we got through and even learned some things (I hope!). Webpack isn't the most exciting topic for a workshop and it might make sense to pivot the topic somehow so it's more exciting.

After the workshop day, there were two conference days that followed a single-track format. The talks varied from 30 to 45 minutes. Single track is my favorite format, although I prefer shorter slots for an improved flow. The longer slots worked fairly well this time, though.

14 Speakers, Web Topics#

In total, the conference had 14 speakers with varying web-related topics. For me, the schedule felt a little random as there were different types (inspirational, technical, topical) of talks during both days. Sometimes it may be better to group related talks next to each other so they can build on each other. Overall, it was a minor gripe and you always need something to improve on for the next edition.

Held in Porto, Portugal#

The conference was held in Porto, the second biggest city of Portugal. As you might guess from the name, it's a coastal city. Porto is crossed by the Douro river and it is characterized by many hills which make the city feel different from most in Europe. I had a chance to explore its different parts and it felt like there were still places to discover. The character of the city changes greatly as you go from the city center to the coast and it made the trip more interesting for me.

The venue had plenty of space

Conference Day 1#

The first day of the conference contained talks with varying topics and it was difficult to discern a specific theme. That said, there were insights to gain and I've listed mine below.

SVG Illustrations as Components - Elizabet Oliveira#

Elizabet Oliveira

The conference began with a talk by Elizabet Oliveira. She discussed how to use SVG illustrations as React components and how to animate them. She also explained the rationale behind react-kawaii. Adding little touches like this can help a user interface relate to its users better on an emotional level.

Beyond Web Apps: React and WebAssembly to rewrite native apps - Florian Rival#

Florian Rival

Florian Rival from Facebook discussed how he ported his game creation tool from a C++ based approach to a WebAssembly and React powered one. Although the new version doesn't completely match the features of the pre-port one, the users seem to be content with it. One of the side benefits of the port is that you can try the tool online. Check out GDevelop to learn more.

Lights, camera, render! Getting your feet wet with WebGL - António Capelo#

António Capelo

António Capelo's talk was a great showcase of how to use WebGL to bring more life to a website or an app.
I have a background in 3D, so there wasn't much new to learn, but it was still cool to see how the technology is finding its way to the web. In particular, I enjoyed the displacement shader examples António used for slide transitions. It feels like WebGL is underutilized at the moment, and it seems like a good direction if you want to add that extra touch to whatever you are doing.

GraphQL without GraphQL - Juho Vepsäläinen#

Juho Vepsäläinen

In my talk, GraphQL without GraphQL, I discussed how my usage of GraphQL has evolved. Initially, I began by writing schema definitions and queries by hand. Since then, I've found ways to skip both by utilizing TypeGraphQL for the schema and a home-baked solution for the queries. In my current proof of concept, I derive queries based on the type definitions of components. The topic likely warrants a blog post with an expanded discussion to show how the bits go together and why the approach might make sense.

Doing the thing right, or doing the right thing? - José Fonseca#

José Fonseca

José Fonseca discussed the different tradeoffs of a developer. Perhaps the main message for me was that you should put thought into architecture and how the different portions of a system go together. It's also good to remember that we'll spend the majority of our time working on legacy systems. Then it can be more about delivering value to the client while progressively improving the system in terms of maintainability.

Building the web in the web - Ives van Hoorne#

Ives van Hoorne and Sara Vieira

Ives van Hoorne discussed with Sara Vieira (live demonstration) how CodeSandbox allows web developers to build the web in the web. The talk gave insight into the history of the tool and how it works together with VS Code these days (the CodeSandbox UI is literally bootstrapped from VS Code). It's impressive what Ives has achieved in a few years and it's going to be interesting to see what's in store. Read the CodeSandbox interview to learn more.

Creative Momentum - How to tap into your creativity to solve problems faster - Joana Galvão#

Joana Galvão

Joana Galvão ended the day by discussing the concept of creativity and how developers can tap into it. She went through a basic process to follow, and perhaps the main point was that developers are creatives too. I think it's a fair assessment, as the work often includes coming up with creative solutions to the problems we face.

Conference Day 2#

The second conference day continued from the first and was filled with talks on varying topics.

useGraphQL() - Sara Vieira#

Sara Vieira

Sara Vieira demonstrated GraphQL in live fashion. It was also the talk that showed what's up with the new hooks feature in React.

A Modern Approach to Digital Product Design - Andrei Herasimchuk#

Andrei Herasimchuk

Andrei Herasimchuk has a background in companies like Adobe, Twitter, and Figma, just to mention a few. He discussed his approach to digital product design. The key insight for me was the value of having a reference design. If you are working on multiple platforms, you should start from a reference design and add platform-specific exceptions to it rather than the other way around. Doing this gives a more consistent UX across platforms and likely helps to reduce development waste. The talk was one of the highlights of the conference for me.

Vue.js and Design Systems - Ramon Victor#

Ramon Victor

Ramon Victor showed how they use design systems with Vue at Booking.com. It complemented Andrei's talk and gave a practical example.
Paint the Web - Drawing with CSS - Eva Lettner#

Eva Lettner

Eva Lettner's talk focused on how to use CSS for drawing. Although the specification hasn't been designed to be used for graphics, there is a group of hacks that can be used to replicate common shapes. There's enough flexibility even for generating something complex. Perhaps the sweet spot is in adding subtle graphical touches here and there while using technology like SVG for heavier-duty needs.

Browsers - For better or worse - Renato Rodrigues#

Renato Rodrigues

Renato Rodrigues' talk gave insight into the security aspects of browsers and how to exploit them. The topic is somewhat undervalued at conferences, and it was cool to see practical examples of how someone might try to exploit your site. Bringing awareness of these issues to the community is valuable.

Designing a Design Culture - Sónia Gomes#

Sónia Gomes

Sónia Gomes has helped to develop a design culture in her organization, and in this talk she went through the related process. For me, the key insight is that having cross-functional teams where designers work together with developers and product owners can help to foster skill development and culture while also creating empathy towards different crafts. Just like testing, design isn't something you can do later in the lifecycle of a product. It's a continuous process, and even non-designers can and should design.

Navigating the Hype-Driven Frontend Development World Without Going Nuts - Kristijan Ristovski#

Kristijan Ristovski

I had seen Kristijan's talk earlier at AgentConf 2018, so I knew what to expect. I think the core point is valid - running after hype doesn't make sense until it does. What I mean is that there's often a certain point after which it makes sense to adopt a new technology. Although the transition comes with a cost, it often solves some pain point. Before jumping on a hype train, think carefully about why you are doing it.

Conclusion#

Porto

I think typeof 2019 had a great start and no doubt it will be a good conference next year as well. The city of Porto especially is worth visiting, and I feel the organizers did well with the event. Although the venue was a little out of the way, I think they made the right choice with it given it was a historical building (the old customs house) with a comfortable setting, and it would be great to visit again.

Functional Programming - Interview with Arfat Salman

If there's one programming style I like a lot, it's functional programming. Although it's not the best fit for all problems, having the means to decompose a problem into smaller parts to be solved separately through composition has often proven useful. In this interview with Arfat Salman, you'll learn more about the topic and how he uses the technique in his daily work.

Can you tell a bit about yourself?#

In my spare time, I like to learn human languages. I learned Spanish as my 4th language (apart from Hindi/Urdu/English). Currently, I am trying to learn Japanese. I hope to learn Mandarin someday. I also co-organize and host a Spanish Meetup in Delhi in association with Duolingo. I love teaching about computers, speaking at conferences, tea, and reading lots of books, especially science fiction.

How would you describe Functional Programming to someone who has never heard of it?#

Any language or style exists to give shape and structure to our thoughts. In the case of Functional Programming (FP), it is a particular set of ideas to structure our thought processes while programming. Once you face a specific programming problem, you may solve it using multiple approaches. FP is one of those ways.

FP has its own set of constraints and guidelines, and it recommends (among many other things) that:

- All variables should be immutable
- Loops should be implicit, or recursion should be used instead of manual C-style for loops
- Functions and their composition should do the heavy lifting in achieving the solution, as opposed to object-based inheritance (as in the OOP paradigm)

In essence, in FP, while coming up with a solution, we mostly think about what to do as opposed to how to do it. In the broader sense, focusing on function composition is what functional programming is. Consider the following program squaring a list of numbers:

```javascript
// Imperative/procedural style
function square(arr) {
  const result = [];
  for (var i = 0; i < arr.length; i++) {
    result.push(arr[i] * arr[i]);
  }
  return result;
}

// Functional style
const square = arr => arr.map(el => el * el);
```

In the procedural style, we mention "how" to achieve the computation via a C-style for loop. We keep track of the loop counter, the termination condition, and the increment expression. We also manually populate the result by pushing into the result array. Finally, in the body of the loop, we specify what to do.

In the functional style, we only specify what to do directly in the map function. The responsibility of looping, termination, and populating the result set is delegated to the map function itself.

It can be argued that the procedural style has more "moving" parts that a programmer needs to keep track of while reading the program. For example, i < arr.length, i++, and result.push(...). In the functional style, the cognitive load is arguably less, and the developer can focus only on the business logic (of transforming the input to output) at hand and not worry about maintaining the loop, for example.
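To make the composition aspect mentioned above concrete, here is a minimal sketch in plain JavaScript; the compose helper and the small functions are illustrative examples of mine, not something from the interview:

```javascript
// A tiny right-to-left composition helper.
const compose = (...fns) => x =>
  fns.reduceRight((acc, fn) => fn(acc), x);

const increment = n => n + 1;
const double = n => n * 2;

// "First increment, then double", built from small pure functions.
const incrementThenDouble = compose(double, increment);

console.log(incrementThenDouble(5)); // 12
```

Larger pipelines are built the same way: each step stays small and testable, and the composition expresses the overall transformation.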
Consider another example, reversing an array:

```javascript
// Procedural/imperative style
const reverse = arr => {
  const reversedArray = [];
  for (let i = arr.length - 1; i > -1; i--) {
    reversedArray.push(arr[i]);
  }
  return reversedArray;
};

// Functional style (using reduce)
const reverse = arr =>
  arr.reduce(
    (reversedArray, current) => [current, ...reversedArray],
    []
  );
```

In the procedural implementation, we had to consciously be aware that the initialization is not from 0 but from arr.length - 1, that the termination condition is i > -1 rather than the usual i < arr.length, and that we use i-- rather than the usual i++. In the functional style, we didn't have to think about those things at all. We use established functions such as fold (aka reduce) to do the heavy lifting.

Haskell#

Here's the same program, but even more succinctly written in Haskell:

```haskell
reverse = foldl (flip (:)) []
```

FP style is a lot like learning a new spoken language. We need to begin thinking in the other language using its grammatical constraints and cultural contexts. FP style is often symbol-heavy, and some syntax may seem unnatural. It does take a little bit of time to master. As our programming vocabulary expands with practice, it'll become easier to understand programs like these even if we don't have any prior experience with the language in question.

SQL#

Most web developers are already intuitively familiar with declarative languages (of which FP is a part) without realizing it. SQL (Structured Query Language) is mostly declarative, though it supports other paradigms too. Here's an example:

```sql
SELECT Orders.OrderID, Customers.CustomerName, Orders.OrderDate
FROM Orders
INNER JOIN Customers ON Orders.CustomerID = Customers.CustomerID;
```

In this, we only mention what we want the computer to do. Namely, to print the OrderID, CustomerName, and OrderDate of all orders after merging the tables Orders and Customers based on the CustomerID. We never specify how to merge two tables or how to select individual rows.

CSS#

CSS is also very declarative (though it is not considered a full language). Here's an example:

```css
p {
  animation-duration: 3s;
  animation-name: slidein;
}

@keyframes slidein {
  from {
    margin-left: 100%;
    width: 300%;
  }

  to {
    margin-left: 0%;
    width: 100%;
  }
}
```

In this simple example, the slidein animation styles the <p> element so that the text slides in from off the right edge of the browser window. We have only specified the "start" and the "end" state of the animation. The intermediate steps are calculated by the browser, and this is true for almost all CSS properties. We never specify how to achieve the effect of a style, only what effects to have.

Functional Programming - A Different Way of Thinking#

FP is often contrasted with the imperative style of languages such as C and Java. Neither style is objectively better than the other. They are just different ways of expressing the same solutions, and within relevant real-life constraints (such as efficiency and correctness), one style may trump the other. However, I can assure you that learning FP will be worth all that time.

How do you use Functional Programming in your daily development?#

I use JavaScript in my day-to-day tasks. Given JS's functional heritage, I often try my best to use pure functions and function composition to achieve a given task. I begin by making all variables const by default. Then I apply a set of transformations to turn the input values into the desired output. I substitute most loops with Array.prototype.map and/or Array.prototype.filter, as in the sketch below.
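A hypothetical before/after of that substitution (my example, not Arfat's): an imperative loop that collects the squares of even numbers, next to a filter/map version:

```javascript
const numbers = [1, 2, 3, 4, 5, 6];

// Imperative: manual counter, condition, and result population.
const evenSquaresImperative = [];
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) {
    evenSquaresImperative.push(numbers[i] * numbers[i]);
  }
}

// Functional: the intent reads directly from the pipeline.
const evenSquares = numbers
  .filter(n => n % 2 === 0)
  .map(n => n * n);

console.log(evenSquares); // [4, 16, 36]
```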
I also try to incorporate libraries like Ramda. I do resort to the imperative style once in a while. However, I am conscious of that fact, and I mark that piece of code so that I can come back to it later after some research and see whether I can refactor it. I am often able to refactor such programs for the better. At Pesto, we use ReasonML for the frontend, and most of our product is written in it.

How does Functional Programming differ from other solutions?#

FP has some strict rules that are grounded in mathematics. It's a marked departure from the "replace the content of a location with updated values" style of solutions. Rich Hickey (creator of Clojure) called that style PLOP, for PLace-Oriented Programming, in his talk "The Value of Values".

Given the strictness of FP's rules, such as pure functions, no side effects, no mutability, and statelessness, the compiler can optimize our programs and make them more efficient. In some cases, the compiler can also check and verify the correctness of our code. For more reasons, see the next answer.

Why learn Functional Programming?#

Why not? Even if one believes that FP has no practical use, I recommend learning it. Learning FP will give you a new perspective, and this will help you express solutions in your current language better, whatever that language may be. If you know only one style of programming (or just one language), then you can fall victim to the "law of the instrument" cognitive bias. It says that "if all you have is a hammer, everything looks like a nail". And for that matter, do not stop at FP. Go ahead and learn other styles too.

Alan Perlis once said:

A language that doesn't affect the way you think about programming is not worth knowing.

And Peter Norvig (Director of Research at Google and author of the book Artificial Intelligence: A Modern Approach) says to learn "at least a half dozen programming languages", preferably of all styles. I wholeheartedly agree with Mr. Norvig and Mr. Perlis. In any case, here are the concrete benefits of writing programs in a functional style.

Better Error Handling#

We often have to deal with null in our programs. This is often handled via option types. The following Rust code takes a numeric guess as a string and parses it into an actual integer:

```rust
let guess: u32 = match guess.trim().parse()
```

What happens when we get a string such as "12a"? Most languages will either return null (or an equivalent) or throw an exception. We can check for the null in an if condition or wrap the line in a try-catch. However, we can forget to do that. Also, the compiler does not enforce that check. In Rust, here's how we do it:

```rust
let guess: u32 = match guess.trim().parse() {
    Ok(num) => num,
    // panic! keeps this example compiling; real code might loop,
    // return a Result, or recover in some other way.
    Err(_) => panic!("String is not a number."),
};
```

In most functional languages, null does not exist. Its concept is often implemented via something called the Option type. In the above Rust code, we add a block that pattern matches on the two variants, Ok and Err, for successful values and errors respectively. If parse is not able to turn the string into a number, it will return an Err value that contains more information about the error. Also, if you forget the Err part or even the whole block, the compiler can issue a warning that you have potentially unhandled edge cases.

Unit Testing and Debugging#

Since every value is immutable and side effects are prohibited, a function can only take its parameters and return a value based on those parameters. These types of functions are called pure functions.
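To make that concrete, here is a minimal JavaScript sketch contrasting an impure function with a pure one (illustrative names of my own, not from the interview):

```javascript
// Impure: reads and mutates state outside its own scope, so the
// result depends on the call history, not just the arguments.
let total = 0;
const addToTotal = amount => {
  total += amount;
  return total;
};

// Pure: the result depends only on the arguments, so it can be
// tested with plain input/output assertions.
const add = (a, b) => a + b;

console.log(add(2, 3)); // 5, always, regardless of external state
console.log(addToTotal(5)); // 5 now, but 10 on the next call
```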
You can test every function in your program worrying only about its arguments and return values. It's this property that testers appreciate. For the same reasons, debugging a functional program is easy. We can't modify global variables or alter values in another scope. All values are constants, and we can see how a function transforms the input to the output.

Concurrency#

A functional program is concurrent by default. While writing concurrent programs in an imperative language, we need to use thread locks to ensure that a shared piece of data is not modified by two threads simultaneously. Since immutability is baked into functional languages, no thread can change data out from under another. No more deadlocks or race conditions.

Machine-Assisted Proofs and Optimizations#

A pure functional language is based on the strict mathematical theory of lambda calculus. The compiler can take a piece of code and generate a more efficient version, or check whether all the edge cases have been covered. As we saw in the "Better Error Handling" segment above, the compiler was able to catch whether you are handling the error or not. It is even possible to create tools that analyze our code and generate edge cases for unit tests automatically!

What does the future look like for Functional Programming and web development in general? Can you see any particular trends?#

The software of today is becoming increasingly complex. Applications (such as WhatsApp) and games (such as Fortnite and PUBG) are handling millions of concurrent users and billions of bytes of data transfer per second.

Increasing Complexity#

Consequently, the programs comprising the software are increasing in complexity too. Also, software is being used in almost everything in the world, from mission-critical systems like rockets and heart pacemakers to general-purpose appliances such as refrigerators and microwaves. On the other hand, hardware is becoming faster and better every day. RAM capacity is increasing, and the number of cores on a processor is expanding. Displays are adding Ks to their resolution year after year.

Challenges of Parallelism#

It is becoming increasingly important to write extraordinarily robust and stable software that can exploit the parallelism of the processors and the increased memory capacity efficiently. Programming languages, in general, are moving towards a style where more work is performed during compilation. For example, Go has made expressing concurrent programs much more manageable than C/C++. Rust now combines functional features in a systems language. Immutability helps programmers not worry about inadvertent changes in one part of the code while modifying another.

New Environments#

Webpack has allowed developers to use multiple resources without manually managing them. All these utilities allow programmers to think about the business logic at hand rather than the low-level details. Web and mobile applications are going to get bigger and hopefully better. Concurrency and distributed systems are going to play an important role. Our languages (and hence our thoughts) will keep changing to take concurrency into account. In web development, we should see more usage of Progressive Web Applications (PWAs) and better support for native-app-like applications based on WebAssembly. However, these predictions should be taken with a pinch of salt, as I can only speak from a limited perspective.

What advice would you give to programmers getting into web development?#

Keep learning!
It may look like there's a vast world out there, and that's true for computer science and web development. But many people are ready to help you in every aspect. Super nice developers have put their articles, videos, books, and software online for free. There is a very supportive open source community that you can engage with and learn from. Keep an open mind, be enthusiastic, and don't lose hope. You'll get it. :)

Any last remarks?#

Thanks for the interview! I am always ready to help students and developers alike. Feel free to reach out to me. I write about JavaScript and web development on Medium. You can also drop me a message on Twitter or LinkedIn.

Conclusion#

Thanks for the interview, Arfat! I think you explained nicely why it's worth learning functional programming. I agree it's about expanding your vocabulary as a programmer. Often the ideas help you decompose complex problems into easier-to-solve portions that might already fit existing patterns.

Dredd - Language-agnostic HTTP API Testing Tool - Interview with Honza Javorek

If there's one thing developers have to deal with most days, it's APIs. For web developers, this often means dealing with HTTP and external services. The question is, how do we test the APIs and make sure everything works as we expect? For this reason, I am interviewing Honza Javorek, the author of Dredd, a tool designed for this particular purpose.

Can you tell a bit about yourself?#

You can learn more about me on my personal website. Since 2011 I have been helping to drive the growth and success of the Czech Python User Group. For years I have been participating in volunteer-driven meetups, courses, workshops, conferences, nonprofits, and more. I have the privilege of working on Dredd, an open source project, as my day job.

How would you describe Dredd to someone who has never heard of it?#

I'll assume the reader knows what web APIs are. Dredd is a command-line tool which tests whether a web API works as expected according to its specification.

How does Dredd work?#

When you are about to develop an API, you usually start with an idea of how it should work. APIs are interfaces between systems, teams, or companies, so to avoid future problems it often makes sense to make the ideation process collaborative. Ideally, to share and discuss the design of the API with other people, you write it down into a document. While a plain text file would be enough for collaboration, if you describe your API in a popular, structured format like API Blueprint or OpenAPI, you get the benefit of having a machine-readable source of truth for your API, which can then be used by various tools. For instance, you can automatically generate documentation from such a file.

```markdown
FORMAT: 1A

# My API

Below is one of the most straightforward APIs written in the
**API Blueprint**. You can use
[Markdown](https://daringfireball.net/projects/markdown/) here.

# GET /users
+ Response 200 (application/json)

        [
            {
                "id": 1,
                "name": "@survivejs"
            },
            {
                "id": 2,
                "name": "@honzajavorek"
            }
        ]
```

When it comes to developing the API server implementation, Dredd can read the API description document, learn about the expected behavior of the API, then make requests to that API and confirm that it returns the expected responses:

```bash
dredd api-description.yaml http://127.0.0.1:8000
```

Dredd ensures that the API implementation never gets out of sync with the specification, so your team and your users can be sure that the essentials you promised in the documentation are always correct. Moreover, since Dredd is an open source command-line tool installable from npm, it can easily be used as part of the API project's test suite and run regularly during development. It can be run on Continuous Integration services such as Travis CI or Jenkins to ensure that no changes to the project break the contract given by the API description document.

How does Dredd differ from other solutions?#

I don't think there's a real competitor to what Dredd does, except for custom-made solutions. It's important to understand that Dredd doesn't completely replace other automated tests. You should still unit test your code and have integration or end-to-end tests to cover various corner cases. Dredd helps you ensure that the essentials you documented and promised in your API description file will always work in the implementation.

Why did you develop Dredd?#

Dredd is a project developed by the company I work for, Apiary (acquired by Oracle in 2017). Apiary pioneered the market of API design tools and contributed a bunch of open source projects to the world along the way, such as Dredd, API Blueprint, or API Elements.
The original author of Dredd is Adam Kliment. I took over the project in 2016 and have been leading and maintaining it until today. And why did we create Dredd? It clicks together with the design-first approach to developing APIs, which we've been preaching for years.

Especially people focusing on the frontend and working on API-backed SPAs know the frustration of having to work with an API which has been designed without their needs in mind. It is a common problem, and I believe it's mostly this frustration which drives people to try GraphQL instead of web APIs based on embracing HTTP and some of the ideas behind the REST architectural pattern.

At Apiary we firmly believe that hard-to-use APIs come to this world mainly by being developed without any communication with their future users and by documentation being generated out of the final code instead of being written with human beings in mind. The right way to develop APIs, according to us, is to design them first, to write the documentation first, and to communicate before any code gets written.

But if you're supposed to write down a document and only then start to write code, how do you ensure that the implementation is done according to that document? The answer is Dredd. Dredd is essential for the design-first approach. So to be able to preach design-first, we had to create Dredd.

What next?#

We've introduced experimental OpenAPI 3 support in Dredd and are currently working on making it a first-class citizen among the supported formats. Other than that, I have many ideas on how to improve the developer experience with Dredd. The reporters could have better-arranged and more useful output, for example.

What does the future look like for Dredd and web development in general? Can you see any particular trends?#

As I already mentioned, I can see people trying GraphQL as an alternative to web APIs. I believe there are good reasons why those two should coexist in symbiosis instead of replacing each other. I think that the future of Dredd in such a world isn't in supporting GraphQL, but in doing more for web APIs so that developing them feels easier.

Dredd could, for example, introduce a way to test not only the API server implementation but also the API client, so the contract would be validated from both sides. Imagine Dredd could read the API description document, then test your frontend or mobile app requests, and in the end give you a verdict on whether they conform to the API description. One more thing I'd love to have in Dredd is testing scenarios, i.e., series of requests and responses.

What advice would you give to programmers getting into web development?#

Even if you don't build web APIs in particular, I believe the design-first approach is a general idea you can apply to any interface as a programmer, whether it is the interface of a library you're building or the user interface of your website. My advice? Read Readme Driven Development by Tom Preston-Werner and How I Develop Things and Why by Kenneth Reitz. Then embrace the concepts and build stuff other people will love to use.

Who should I interview next?#

You should interview Mila Votradovec (@MilaVot) from Snyk. The company is doing a great job allowing people to use the open source ecosystem of their language to the maximum while making it easy for mere mortals to pay attention to security.

Any last remarks?#

Thanks for the interview! I'm happy to answer any questions about Dredd on Twitter or in Dredd's issues.
If by any chance anyone reading this happens to be interested in Python as well, consider visiting this year's edition of PyCon CZ happening on June 14–16th. It's the first time it's in Ostrava, and the team has managed to secure a fantastic venue for the event - former steelworks and coal mine premises.

Conclusion#

Thanks for the interview, Honza! For me, it's an entirely new way of testing I hadn't considered before, and I can see value in the approach. I'm sure other people will too. To learn more, check out the Dredd site and Dredd on GitHub.

Managing css-in-js Components with Namespaces

I've settled on using Emotion for styling my React applications. The API is close to styled-components, and especially Emotion 10 is filled with functionality. I use only a small part of it in my daily work. As I've been working on the print graphics for React Finland, I've often come across the problem of maintaining different versions of designs in my files. I've found a way to solve this.

Objects as Namespaces for css-in-js Components#

Let's say I have basic styled components for a speaker/attendee badge like this:

```javascript
const Badge = {
  layout: {
    Base: styled.article`...`,
    Header: styled.header`...`,
    Content: styled.main`...`,
    Footer: styled.footer`...`,
  },
  with: {
    Logo: styled.img`...`,
    Name: styled.span`...`,
    Company: styled.span`...`,
    Twitter: styled.span`...`,
    Type: styled.span`...`,
  },
};
```

I use layout and with to separate components by purpose. To turn this into a layout, you'll likely have something like below:

```jsx
function StyledBadge({ logoUrl, company, name, twitter, type }) {
  return (
    <Badge.layout.Base>
      <Badge.layout.Header>
        <Badge.with.Logo src={logoUrl} />
      </Badge.layout.Header>
      <Badge.layout.Content>
        <Badge.with.Name>{name}</Badge.with.Name>
        <Badge.with.Company>{company}</Badge.with.Company>
        <Badge.with.Twitter>{twitter}</Badge.with.Twitter>
      </Badge.layout.Content>
      <Badge.layout.Footer>
        <Badge.with.Type>{type}</Badge.with.Type>
      </Badge.layout.Footer>
    </Badge.layout.Base>
  );
}
```

I know the above looks a little verbose, so what's the point? In addition, you would have a way to connect to data somehow, but that goes beyond this post.

Implementing Variants#

Let's say you are going to organize another conference. This time around you'll want to display the data in a different order or even display different data. The most basic change would be to alter the layout:

```jsx
function AnotherStyledBadge({ logoUrl, name, type }) {
  return (
    <Badge.layout.Base>
      <Badge.layout.Header>
        <Badge.with.Logo src={logoUrl} />
      </Badge.layout.Header>
      <Badge.layout.Content>
        <Badge.with.Name>{name}</Badge.with.Name>
      </Badge.layout.Content>
      <Badge.layout.Footer>
        <Badge.with.Type>{type}</Badge.with.Type>
      </Badge.layout.Footer>
    </Badge.layout.Base>
  );
}
```

Given you might want stylistic changes as well, you'll need to adjust the components:

```javascript
const AnotherBadge = merge({}, Badge, {
  with: {
    Name: styled(Badge.with.Name)`...`,
    Type: styled(Badge.with.Type)`...`,
  },
});
```

It's important to note that the merge operation you use here should retain the old definitions you have (i.e., not lose Twitter and the others); a deep merge, as sketched at the end of this section, does exactly that. The component also needs tuning:

```jsx
function AnotherStyledBadge({ logoUrl, name, type }) {
  return (
    <AnotherBadge.layout.Base>
      <AnotherBadge.layout.Header>
        <AnotherBadge.with.Logo src={logoUrl} />
      </AnotherBadge.layout.Header>
      <AnotherBadge.layout.Content>
        <AnotherBadge.with.Name>{name}</AnotherBadge.with.Name>
      </AnotherBadge.layout.Content>
      <AnotherBadge.layout.Footer>
        <AnotherBadge.with.Type>{type}</AnotherBadge.with.Type>
      </AnotherBadge.layout.Footer>
    </AnotherBadge.layout.Base>
  );
}
```

The outcome is quite verbose and visually noisy. Since we already rely on a convention, there would likely be a way to fold this into a simpler structure that connects with the component definition (AnotherBadge). The above is what I consider a standard JSX solution.
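As a side note on the merge step above, here is a minimal sketch of the retaining behavior using lodash's merge; plain strings stand in for the styled components purely for illustration:

```javascript
import merge from "lodash/merge";

const Badge = {
  with: { Name: "name-style", Twitter: "twitter-style" },
};

// Deep merge into an empty object: keys missing from the override
// object survive from the base object.
const AnotherBadge = merge({}, Badge, {
  with: { Name: "new-name-style" },
});

console.log(AnotherBadge.with.Name); // "new-name-style" (overridden)
console.log(AnotherBadge.with.Twitter); // "twitter-style" (retained)
```

A shallow spread ({ ...Badge, with: { ... } }) would replace the whole with object and lose Twitter, which is exactly the pitfall to avoid here.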
Perhaps the point would be to write something like this:

```jsx
function AnotherStyledBadge({ logoUrl, name, type }) {
  return create(AnotherBadge)({
    Base: {
      Header: {
        with: ["Logo", logoUrl],
      },
      Content: {
        with: ["Name", name],
      },
      Footer: {
        with: ["Type", type],
      },
    },
  });
}
```

The create function would use the structure to generate the React code. Although I haven't implemented one yet, I can see it's not the most complicated thing to write. It's something to experiment with in the future as I develop more layouts.

with should support arrays of arrays as well, as then you could support multiple content nodes within a container. Likely mixed usage (container + content) should be supported in some way too.

Conclusion#

Although the outcome looks verbose without the create helper I imagined, it's flexible to modify and it doesn't pollute the namespace of your module. You can keep the hierarchy flatter if you prefer. The technique can be combined with others. For example, you could consider developing basic primitives around a package like Styled System and then combining those with a style management approach such as this one.

atomic-layout - Layout composition as a React component - Interview with Artem Zakharchenko

Often laying out a web page is an afterthought. Put a div here and there, sprinkle some CSS, and call it done. Perhaps you are more advanced and use CSS Grid to figure out exact positioning. What if there were an alternative way to achieve the same while having more power as a developer? That is something that Artem Zakharchenko is exploring with atomic-layout, a layout solution designed for React.

Can you tell a bit about yourself?#

I'm grateful to have found a job that turned my hobby into a full-time activity. That hasn't stopped me, however, from venturing into side projects and open source. Today I'd like to share one such project with you.

How would you describe atomic-layout to someone who has never heard of it?#

Atomic layout is a React library that provides you with a separate layer responsible for the spatial distribution of your layouts. The layer decouples components and spacing, which opens vast opportunities to reuse layout units, as they are no longer bound to some context via specific spacing properties. Practically speaking, it gives you a fast and maintainable way of developing responsive layouts that share global layout settings and create a unified system.

How does atomic-layout work?#

It generates React components based on the provided CSS Grid template areas and controls their spatial relations via props. It also supports a feature called responsive props that allows applying prop values conditionally, based on a breakpoint. I will demonstrate the workflow below, but we need to install the library first:

```bash
npm install atomic-layout
```

The current version of the library uses styled-components, so we need to install it too:

```bash
npm install styled-components
```

Would you like to use Atomic layout with another styling solution? Join the discussion!

Basic example#

Let's say we want to create a simple Header component that consists of three areas: logo, menu, and actions. First, we define a verbose areas definition of our Header:

```javascript
const areas = "logo menu actions";
```

The areas definition uses pristine grid-template-areas syntax. That's all it takes to create a layout of three equally spaced areas positioned inline. Now we can provide this areas string to the Composition component of the library:

```jsx
import React from 'react'
import { Composition } from 'atomic-layout'

// Layout areas definition
const areas = 'logo menu actions'

const Header = () => (
  <Composition areas={areas} gutter={10}>
    {({ Logo, Menu, Actions }) => (
      <>
        <Logo>
          <img src="logo.png" />
        </Logo>
        <Menu as="nav">
          <ul>{...}</ul>
        </Menu>
        <Actions>
          <a href="/login">Log in</a>
        </Actions>
      </>
    )}
  </Composition>
)
```

After passing our areas to the areas prop of the composition, it exposes a function as its children. That function accepts generated area components associated with the respective CSS Grid areas. There are plenty of props that can be applied to the Composition and other components exported from the library to achieve the desired effect.

Responsive layout#

Atomic layout is mobile-first and responsive by default, which means that it has rich support for conditional rendering and responsive prop application.

Conditional rendering#

To conditionally render one or multiple components, we can wrap them in the Only component, providing it with breakpoint constraints.

```jsx
import { Only } from "atomic-layout";

const Disclaimer = () => (
  <Only from="sm" to="lg">
    <p>
      Content displayed between small and large breakpoints.
    </p>
  </Only>
);
```

The Only component supports from, to and except props, and any combination of those.
You can pass breakpoint names as the values, or use an Object in case of a custom breakpoint. You can use the Only component just like any other React component. For example, you can render it nested within generated layout areas! When using the except prop, the children will be rendered everywhere except the given breakpoint range:

```jsx
import { Only } from 'atomic-layout'

const Disclaimer = () => (
  <Only except from="sm" to="lg">
    {...}
  </Only>
)
```

Read more about the Only helper component.

Responsive props#

Whenever a prop name is suffixed with a breakpoint name, its value is applied only from that breakpoint and up. Take a look at how easy it is to have a per-breakpoint gap between the Header's composites (areas):

```jsx
import { Composition } from 'atomic-layout'

const Header = () => (
  <Composition
    areas={...}
    gutter={10}
    gutterMd={20}
    gutterLg={30}
  >
    {...}
  </Composition>
)
```

Responsive props respect overrides, which means the values above would be applied as follows:

- gutter={10} on the xs breakpoint and up;
- gutterMd={20} on the md breakpoint and up;
- gutterLg={30} on the lg breakpoint and up.

You can define custom breakpoints and use them as the suffixes of your responsive props (i.e. paddingRetina or alignItemsDesktop).

The areas prop can be responsive as well! By providing different areas definitions for different breakpoints, we can alter the position and presence of our layout areas with a single prop.

```jsx
const areasMobile = `
  logo hamburger
`;

const areasTablet = `
  logo menu actions
`;

const Header = () => (
  <Composition areas={areasMobile} areasMd={areasTablet} gutter={10}>
    {({ Logo, Hamburger, Menu, Actions }) => (
      <>
        {/**
          * "Logo" is present in both area definitions,
          * and is always rendered.
          */}
        <Logo />
        {/**
          * "Hamburger" is present in "areas", but missing
          * in "areasMd". It's automatically rendered only
          * on "xs" and "sm" breakpoints.
          */}
        <Hamburger />
        {/**
          * "Menu" and "Actions" are present in "areasMd",
          * and are automatically rendered on the "md" breakpoint
          * and up.
          */}
        <Menu />
        <Actions />
      </>
    )}
  </Composition>
);
```

Welcome declarative layouts: you describe what and when to render, and let Atomic layout handle the media queries and conditions.

How does atomic-layout differ from other solutions?#

Spacing as a first-class citizen#

Unlike other solutions, Atomic layout's purpose is to distribute spacing. Spacing effectively defines a layout composition. That's why there are no predefined Col or Row components, but a Composition that can be anything you want. A grid is a composition of rows and columns, a header may be a composition of a logo and a menu, and so on. There is no need to be specific when you are wielding the entire power of composition as a physical entity.

Mentality shift#

One of my favorite differences is that Atomic layout teaches you to think in terms of compositions, which you can configure and render. Since its counterparts compose any layout element, you get a consistent component declaration throughout the entire application. Having a predictable way in which components are defined makes their maintenance superb. Instead of deciding which CSS properties I need to create a layout, I started asking myself: "What does this layout consist of? What is the relation between its counterparts?"

Encouraging CSS Grid#

We try to make the experience of working with Atomic layout a fun way to learn CSS Grid and gain knowledge you can apply without any libraries whatsoever. To do so, we minimize the amount of library-specific transformations of the options you provide to your composition.
```jsx
<Composition areas="header footer" templateCols="1fr auto" />
```

```css
.composition {
  grid-template-areas: "header footer";
  grid-template-columns: 1fr auto;
}
```

Verbose prop names and no value transformations grant almost 1:1 compatibility with the emitted CSS.

Fast#

Needless to say, layout development becomes fast and efficient. You can develop production-ready components in a few minutes without writing a single line of CSS (if you want). And that includes responsive behavior as well!

Why did you develop atomic-layout?#

During work on one of my side projects, I noticed that I repeated the same layout patterns over and over. So I tried to abstract the logic that makes up those patterns into a contextless layout unit. My admiration of Atomic design came into play, and in no time I realized that atoms and molecules could be described using CSS Grid template areas. One proof of concept later, Atomic layout was open-sourced.

What next?#

The roadmap is to refine the existing API, improve server-side rendering, and listen to the community to evolve the library. The mission is to provide a great experience for implementing layouts.

What does the future look like for atomic-layout and web development in general? Can you see any particular trends?#

I hope that CSS Grid will be getting more usage, as it's indeed the future of the web. There's also a lot of attention around TypeScript and GraphQL, and I believe they will be the main trends this year. As for Atomic layout, I would love to see people creating layouts with it and sharing their experiences. I hope together we can improve our process, encourage the use of modern technologies, and teach developers to think in terms of composition.

What advice would you give to programmers getting into web development?#

I wish newcomers find a balance between practical and theoretical knowledge and don't neglect a deeper understanding of a subject, even if it means spending more time. Don't be afraid to fail, and don't fear the unknown. In the end, programming is about challenging yourself every day.

Who should I interview next?#

I suggest interviewing Honza Javorek (@honzajavorek), who is the person behind an API testing tool called Dredd. I'm also excited to join his team full-time to work on that project.

Any last remarks?#

Thank you for the interview! I want to invite everybody to the upcoming React Finland 2019, where I will be giving a talk on Atomic layout. I will be glad to answer your questions there, or via Twitter.

Conclusion#

Thanks for the interview, Artem! At least for me, it is a refreshing new way to look at how to develop and compose layouts. You can learn more about the approach in the video below:

Check out Atomic layout on GitHub and read its documentation to grasp it better.

Pesto - A career accelerator for India’s top software engineering talent - Interview with Andrew Linfoot

One of the unique aspects of the internet is that it makes us all equal in a strange way. What this means is that collaboration is possible on a new level, as we aren't restricted by our local communities anymore. The internet has led to changes in the way we work and also in the way we seek opportunities. Andrew Linfoot runs a career accelerator called Pesto for the Indian market. To learn what he thinks about the topic, read on.

Can you tell a bit about yourself?#

I'm obsessed with building things, usually involving software and startups. Currently, I am making Pesto. When I'm not building things, I enjoy adrenaline sports, traveling, meeting new people, and immersing myself in cultures radically different from my own.

How would you describe Pesto to someone who has never heard of it?#

Pesto is a school for talented software engineers in developing countries. We brush up their tech skills, teach them soft skills for remote work, and then help them get full-time remote jobs at international tech companies. Doing this gives them a foot in the door to international tech careers that would otherwise be inaccessible.

How does Pesto work?#

Engineers apply at apply.pesto.tech. If they are accepted, they fly to our training center in Delhi and train with my team and me for 12 weeks. After training, we help introduce candidates to companies for interviews, where they get significantly better opportunities than in local markets (averaging a 6x increase in salary and much more interesting tech stacks).

Our program is free up front. Students pay us via income share agreements (ISAs). The approach allows us to provide insanely high-quality training and career support services while still being accessible to people in India (where we currently operate).

How does Pesto differ from other solutions?#

There are other ISA-based boot camps for learning to code (Lambda School being the most famous). However, these schools focus on teaching beginners to code from the ground up. We concentrate on upskilling experienced engineers. Since our students typically have 1-5 years of work experience, our curriculum has a significant emphasis on soft skills. We teach students about cultural differences between the US and India, how to be better communicators, how to manage their time in a remote work environment, etc.

However, the unique thing about Pesto is the fact that the whole training program is designed to create a change in mindset. By the time students graduate, they have the confidence to believe that their career opportunities are unbounded and that they can participate as equals in the global tech community. The skills and connections are just icing on the cake.

Why did you found Pesto?#

The short version: equal opportunity in the global economy does not exist. Most people don't even get a chance. Their potential is permanently capped just because of where they were born. At Pesto, we are creating a world where access to education and remote work gives everyone equal access to opportunity, regardless of where they were born.

The extended story: How a kid from San Francisco ended up starting a school in India.

What next?#

There are millions of brilliant people all over the world that are missing out on opportunities only because of where they were born. We are on a mission to find these undervalued people and give them a chance to prove themselves. Step one: scale up in India. Step two: scale up globally.
Can you see any particular trends?# We believe that the future of work is distributed. When this becomes the norm, we want to be the access point for the world's talent. When everyone has equal access, humanity will unlock vast amounts of untapped talent. In terms of tech: I'm a massive fan of React, GraphQL, and ReasonML. We use ReasonML for our internal tech at Pesto, and I can't imagine going back to writing vanilla JS. When all front end code is statically typed from your GraphQL API down, the developer experience is magical. What advice would you give to programmers getting into web development?# Get involved in the open source community. The open source community is unique because some of the most brilliant minds in the field put their entire life's body of work out in the open for free. No matter where you are in the world, you can learn from the best. You can not only read their code, but you can also see how they communicate and manage teams via GitHub issues and PRs. You can see how they think by following them on Twitter. You can email them, and you'd be surprised at how many will spend the time to talk to you. This kind of access is unimaginable in most fields. Contributing to open source can be intimidating. If you aren't sure where to start, try some small documentation PRs like fixing a typo or a broken link in a readme. You can also check out this free course by Kent C. Dodds. Who should I interview next?# I'm biased, but you should talk to Arfat, Pesto's Director of Education. He's a brilliant engineer and overall a super kind human. Any last remarks?# If you are interested in taking a bet on the undiscovered talent of the world, we'd love to have you involved with Pesto, either by interviewing/hiring some of our graduates or by sharing our story and helping to get the word out. Conclusion# Thanks for the interview, Andrew! What he described might well trigger a change. The great thing is that initiatives like this allow more people to reach their potential no matter where they are born. To learn more, head to the Pesto site.

Codecrumbs - Document a Codebase by Breadcrumbs - Interview with Bohdan Liashenko

Developers spend most of their time reading and understanding code. That said, not much has changed in the past decades in the way we do it. Perhaps the IDEs have become smarter, but we still use roughly the same techniques as before. I prefer to jump around code and perform regular searches against it to understand how everything goes together. Bohdan Liashenko thinks there's room for improvement. As a result, he developed Codecrumbs, a tool addressing the problem. Bohdan is one of the speakers at React Finland 2019. The presentation will be available later in a video format. Can you tell a bit about yourself?# I'm the author of codecrumbs (a tool for learning a codebase), js2flowchart (a library to convert code into flowcharts), and Under the hood ReactJS (a book which explains the ins and outs of React). Currently, I study a lot about technical constraints and the human factor when it comes to building software products. I am passionate about software delivery processes and believe there is still room for improvement. How would you describe Codecrumbs to someone who has never heard of it?# Every time I joined a project or started digging into an unknown source in the past, I caught myself thinking that I was just jumping mindlessly between files, often opening the same file several times only to realize that I had already seen that place. I realized I needed a tool to mark essential places in a codebase; I wanted to automate the "pencil and paper" approach we all use when trying to understand the big picture of how things work together inside our code. That's why codecrumbs exists. Codecrumbs is a visual tool which helps to understand a codebase. The name "codecrumbs" is derived from "code" and "breadcrumb", since the main idea is to control the visual state by writing down comments (breadcrumbs) in your big code maze (a reference to the "Hansel and Gretel" story). Codecrumbs offers many features when it comes to learning a codebase:

- Trail of breadcrumbs - a sequence of codecrumbs can be used to describe a data flow (e.g., user login or form submit, etc.)
- Dependency tree - generate a dependency tree for an entry point. You can select connections and see "what is imported" and "its implementation".
- Flowchart - builds an SVG flowchart of selected file code.
- Multi-codebase integration - helps to study connections between several codebases (sub-modules).

On top of that, there is multi-language support (JavaScript, TypeScript, Python, Java, etc.) and the ability to share your findings with others through a handy export and import feature.

User interface of Codecrumbs

How does Codecrumbs work?# Codecrumbs is a client-server application with communication via sockets. When you run the codecrumbs command for a codebase, the server analyzes the project code and looks for comments containing "codecrumbs" (i.e., comments which start with 'cc'), collects them and sends them to the client (running in a browser). The client imposes the codecrumbs on the project's file structure and draws an SVG image. There is support for "live updates", so the process of use may be as follows: on one monitor, your code editor; on the other, the browser tab with the "codecrumbs" client. Write a comment, and the scheme is rebuilt on the fly. Everything is implemented with JavaScript: the server is Node.js, and the client is built with React and Redux.
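To make the idea concrete, here is a rough sketch of what such breadcrumb comments could look like in practice. The comment format below (a trail name and step number after the 'cc' marker) is only illustrative; check the codecrumbs documentation for the exact syntax:

```js
// cc:user-login#1;Collect the credentials from the form
function handleSubmit(credentials) {
  return login(credentials);
}

// cc:user-login#2;Send the credentials to the server
function login(credentials) {
  return fetch("/api/login", {
    method: "POST",
    body: JSON.stringify(credentials)
  });
}
```

With comments like these in place, the client can render the "user-login" trail as an ordered sequence of steps laid over the file structure.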
How does Codecrumbs differ from other solutions?# There are no similar solutions. :) But if we take, for example, other tools for code documentation, they either help to describe low-level code entries (function signatures, class APIs, etc.) or high-level ideas about code architecture without a connection to the source code. Having either alone is not enough: on the one hand, you miss the big picture, and on the other hand, you know the big picture but can't map it onto the actual source code. Codecrumbs is precisely in between; it glues together code lines and source files with abstract architecture. You can understand how a particular feature works at first glance, and if you need to fix something or add new behavior there, you already know where to look in the source code. Why did you develop Codecrumbs?# I was frustrated with how inefficient we are when it comes to maintaining a big codebase. Especially with legacy code. Especially when you have just joined a new project and feel hopeless for the first few months until you get thoroughly familiar with the codebase. According to a survey conducted by Stack Overflow last year, here's how long we expect a newcomer to take to get up to speed with a codebase:

- 30% less than a month
- 44.7% one to three months
- 17.4% three to six months

I believe these numbers are ridiculous. Well, they are fair, but I think we can do better. I think we don't make enough effort to solve the problems related to learning existing source code, and I am trying to change that with Codecrumbs. What next?# Well, it's just the beginning. Codecrumbs allows us to learn, document and explain a codebase much faster. Also, with the Download & Upload feature, it becomes super easy to collect and share your "investigation results" with others. The ultimate goal is to have many case studies hosted at the Codecrumbs site so people can learn collaboratively with real-world examples. More cool features are coming, stay tuned! What does the future look like for Codecrumbs and web development in general? Can you see any particular trends?# I can see new frameworks and libraries, or even new languages (TypeScript, Elm, Reason, ClojureScript, Dart), being created all the time to help developers do the same thing: write down application logic. But do we need one more framework which helps us describe application logic a fraction better than we've been doing already? Maybe we do. Don't get me wrong, I like constant improvements in existing frameworks, and I love new frameworks and tools being created, but I don't think we should spend all our energy and give all our attention only to that. As an industry, we are too obsessed with how we write code; we want to make the writing process perfect, while the way we read code is left entirely behind. We don't write code in a vacuum; we almost always edit or extend an existing codebase rather than adding new features on a blank page. I believe that Codecrumbs has the potential to show an excellent example of what you can achieve by being more aware of the entire state of your application, and how you can use that knowledge to change your product delivery and influence the world of software development as a whole. What advice would you give to programmers getting into web development?# The amount of technologies and "buzzwords" is overwhelming here, but try to be sober about it. In the end, you need to submit a form or show a popup. There are hundreds of ways to do so; pick the one you know, align it with the team and release. Make it work, make it fast, make it beautiful.
We (mostly) don't do rocket science, so there is no need to overthink minor problems. Apart from doing something useful, the primary requirement for the code you write is to be easy to change tomorrow. Any last remarks?# Thanks for the interview, that was fun. :D Conclusion# Thanks for the interview, Bohdan! It's great to see solutions appear in the maintenance space as it feels underappreciated. We put a lot of effort into developing solutions for generating code but not so much into understanding it. To learn more about Codecrumbs and to try it out, head to the Codecrumbs site and also see the project on GitHub.

packtracker.io - Webpack bundle analysis, for every commit - Interview with Jonathan D. Johnson

When using webpack to bundle your project, it's important to keep an eye on the output. There are multiple tools for this purpose. Now there's also a service. In this interview with Jonathan D. Johnson, you'll learn about packtracker.io. Can you tell a bit about yourself?# CodeShip by day and packtracker.io by night; otherwise you can probably find me outside. 🧗🏻‍♂️🏕 How would you describe packtracker.io to someone who has never heard of it?# It is a tool that helps teams using webpack monitor and analyze their overall JavaScript and CSS footprint. We report that information right in your GitHub pull requests, before the changes make it to your users. Ever accidentally added the whole lodash library when you meant to add a single helper? We'll catch that for you and let you know your bundle size grew significantly, right inside your GitHub pull requests. Another primary feature is the ability to set asset budgets to help your team stay within configured limits. We'll fail any pull request that brings your assets outside those budgets. How does packtracker.io work?#

- You sign up and create a project
- Install and configure our webpack plugin
- We take care of the rest
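To make the second step concrete, here is a rough sketch of what the plugin setup could look like. The package name and option names are assumptions based on the description above; consult the official documentation for the exact API:

```js
const PacktrackerPlugin = require('@packtracker/webpack-plugin');

module.exports = {
  // ...your existing entry, output, and loader configuration
  plugins: [
    new PacktrackerPlugin({
      // hypothetical placeholder; issued when you create a project
      project_token: 'your-project-token',
      // only upload stats from CI builds so local builds stay fast
      upload: process.env.CI === 'true'
    })
  ]
};
```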
How does packtracker.io differ from other solutions?# No other solution provides historical tracking and out-of-the-box pull request reporting. By this I mean, you can analyze your bundle history over time, helping to quantify optimization efforts and prevent slow size creep.

Build status

We also allow you to introspect the makeup of every commit, allowing you to explore your bundle and helping you to identify redundant chunk contents and large dependencies.

Bundle composition

Why did you develop packtracker.io?# I saw a need! We were using webpack in my day job and quickly realized we had never audited our webpack output. After taking a look, it had gotten out of control, delivering megabytes of JavaScript and CSS to our users. Using packtracker.io, we were able to quantify optimization efforts and trim our asset output way down. Day to day, packtracker.io helps us make sure we never get in that situation again. What next?# I am looking forward to GitHub Actions! We currently have a GitHub Action developed that will significantly simplify the onboarding experience for our customers. What does the future look like for packtracker.io and web development in general? Can you see any particular trends?# JavaScript is eating the world, and I don't see that changing. Even popular non-JavaScript platforms like Ruby on Rails will be adopting the webpack/npm ecosystem starting with 6.0. I think the future is bright for JavaScript developers. What advice would you give to programmers getting into web development?# Don't skim over JavaScript! Even if your primary web application is not written with JavaScript, take the time to dig in and learn the language. Don't assume you know it because it looks familiar; that's how people develop a negative view of the language. Conclusion# Thanks for the interview, Jonathan! I can definitely see the need for a service like packtracker.io, and I hope people find it!

Overmind - Frictionless State Management - Interview with Christian Alfoni

Although state management solutions like Redux have become the standard, at least with React, there's still room for innovation in the space. Sometimes what happens is that technology becomes reinterpreted. When you can see the technology in context, you can also figure out what went right and what went wrong. It's this process that gives room for innovation. In this interview, Christian Alfoni will tell us about a new state management solution - Overmind. I interviewed Christian earlier about Cerebral, another state management solution. Consider Overmind as its spiritual successor. Can you tell a bit about yourself?#

Christian Alfoni

I am a 35-year-old Norwegian developer who figured out coding was his big passion around 26. That said, I have spent many years face to face with customers, drinking coffee, and sometimes I miss that a lot. Everybody needs recognition in some form, and it just hits you harder when it is face to face. As much as I love open source, you rarely get face-to-face recognition for creating value for someone. "Thanks, I learned something from you" or "man, I enjoyed using this tool to solve someone else's problem" beats 10,000 stars on GitHub any day. I have to admit I am at the airport with a delayed flight, got two beers in me and a bit of gin and tonic, so I got a bit philosophical there. In terms of contributions, I have been sharing most of the stuff I have created. Sometimes bad ideas, other times good ideas, and often just iterating on existing ideas to try to push them further. What I mostly care about is state management. It is one of these challenging problems, as your perspective on it is heavily affected by the types of apps you build. It is almost like speaking different languages at times. But where I have given my perspective is with cerebraljs and Overmind. I have even contributed to the world of Flutter with the flutter_observable_state package. I have also worked a lot on Codesandbox, trying to tame the extreme state management requirements there. My efforts there will increase over the summer, and I am looking forward to showing people how we can separate our concerns, lower the threshold for contributions, and have great insight into how the application works. Read the Codesandbox interview to learn more about the service. How would you describe Overmind to someone who has never heard of it?# Overmind is a state management library. That said, it takes things a bit further and pushes you to conceptually and practically think of the UI of the application as an implementation detail. Components, no matter the framework, are just a powerful way to compose a UI. That means Overmind can contain all the state, effects, and logic required to make your application work. Separating all your state management from the UI is of course not a new idea; we have been doing this for ages. In this "components can do it all" world, I think it is crucial that someone states that even though components are an excellent UI composition tool, they are not necessarily an awesome state management tool. It can be a practical approach to keep those two things completely separate. At least that is my experience. How does Overmind work?# There is one decision you have to make early on with a state management tool. Should it be based on immutability or mutation tracking? In my experience, immutability is a technically elegant solution. The problem is that the developer experience tends to suffer because of the amount of boilerplate code required.
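To illustrate the boilerplate difference, here is a generic sketch (not Overmind-specific) contrasting an immutable update with a tracked mutation:

```js
const id = 1;
const newTitle = "Updated title";
const state = { posts: [{ id: 1, title: "Old title" }] };

// Immutable style: every level of nesting has to be copied by hand,
// and anything watching the posts array sees a brand new array.
const nextState = {
  ...state,
  posts: state.posts.map(post =>
    post.id === id ? { ...post, title: newTitle } : post
  )
};

// Mutation-tracking style: change the value directly and let the
// library observe exactly which path was touched.
state.posts.find(post => post.id === id).title = newTitle;
```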
The issue of boilerplate applies to everything from defining a state change, to mapping state to components, to worrying about render performance if too much state is exposed, and so on. There were two things we wanted to do with Overmind. The first thing was to have an API-less API. The other was to give a kick-ass experience using TypeScript. An API-less API means that we take as much advantage of the native API of JavaScript as possible, which results in you defining your state as simple objects, effects as simple methods, and logic as everyday functions. That said, Overmind is aware of these building blocks and enhances them. See Christian's article about mutation tracking to learn more. State as Plain Objects# One of those enhancements is that even though you define your state as plain objects, arrays, strings, etc., it becomes reactive. That means when you change some state, Overmind can pinpoint which components are interested in that change, even though you use the native mutation API of JavaScript.

Overmind devtools showing state

It can do this because it uses proxies. What is essential to understand about this is that mutation and proxies allow for a far more optimized notification of what has changed compared to immutability. With immutability, a change to a post's title in an array causes a change to the post itself and also to the array with all the posts. That means any component looking at the array of posts will reconcile. Since UIs often consist of lists of things, this can cause performance issues, as the whole list in the UI is evaluated when only a single value changes. This is not the case with mutable proxied state. The chosen approach also improves the debugging experience, as Overmind can tell you exactly what value you changed, and it knows the dependencies of the components.

Overmind devtools showing a mutation

All Actions Have The Same First Argument# The second enhancement is that every function, or action as we call them, will have the same first argument. This argument is injected and contains all the state, effects, and other actions of your application. That means there is no isolation. Holding your state, logic, and effects in isolation can cause more harm than good in my experience. Splitting up your domains and concerns should be a discipline, not forced upon you. Who has ever created an application where you know from the start exactly how your state and logic should be split up and contained in the end? Does an application ever "end"? Being allowed to freely explore the domains of your application without being forced to refactor is freedom. And you know, sometimes you do have cross-domain state access and logic. There are other aspects I could go into related to Overmind, but I think those are the two most fun to bring up :) Example of the API# Here is an example of how straightforward the API is. The mutations are locked to these actions. You also see how we put an effect abstraction around the actual fetching of the posts, which is the essence of Overmind: API simplicity. To boot, the devtools track everything that happens here, even the effect:

```js
export const loadPosts = async ({ state, effects }) => {
  state.isLoadingPosts = true;
  state.posts = await effects.api.getPosts();
  state.isLoadingPosts = false;
};
```

Since we use proxies, we can also make sure that you never worry about render performance for whatever state you expose to components. Whatever state is accessed by a component is tracked and will cause the component to render again, if changed.
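Conceptually, the tracking can be sketched with a plain Proxy. The following is a simplified illustration of the technique, not Overmind's actual implementation:

```js
// Minimal sketch of dependency tracking via Proxy: every property
// read is recorded, so a library can know what a component depends on.
function track(state, onAccess) {
  return new Proxy(state, {
    get(target, key) {
      onAccess(key); // record that this key was read
      const value = target[key];
      return typeof value === "object" && value !== null
        ? track(value, onAccess) // wrap nested objects too
        : value;
    }
  });
}

const accessed = new Set();
const state = track({ count: 0, user: { name: "Ada" } }, key =>
  accessed.add(key)
);

console.log(state.user.name); // "Ada"
console.log([...accessed]); // ["user", "name"] - the tracked dependencies
```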
With hooks in React you get a pretty rich API for exposing state and actions to your components:

```js
function MyComponent() {
  const { state, actions } = useOvermind();

  return (
    <div>
      <button onClick={actions.decrement}>-</button>
      {state.count}
      <button onClick={actions.increment}>+</button>
    </div>
  );
}
```

How does Overmind differ from other solutions?# You know, the two big solutions out there now are Redux and MobX/MobX-State-Tree. Where Overmind differs the most is that it is not just a state management solution, meaning it defines and changes state. It also manages effects. If you think about it, it is quite insane how we keep importing all these generic tools directly into our code, locking ourselves to "the current technology". In practice, this might not be such a big deal, but as a coding principle, it is not a good practice. For example, in Codesandbox, we have an API based on GraphQL. In a React component, we import apollo-react directly and query some sandboxes for the dashboard. While this is easy and straightforward, we just made React, Apollo and GraphQL a hard dependency of the Codesandbox application. That is not good. If we instead created an effect API, server.getDashboardSandboxes(), we suddenly have no hard dependency on React, Apollo or GraphQL. All of them become an implementation detail. And this is what Overmind is pushing. Your application is the state, the effects, and the logic to manage them. The UI is an implementation detail, and the tools you use to allow the app to talk to the outside world are also an implementation detail. What this results in is logic that is "to the point". All the naming is explicitly related to the domains of your application. You can change out the UI or run the same app in multiple environments, and you can change out any of the tools your application uses to talk to the outside world, without touching the application itself. It is just about separation of concerns really, but with a concept of effects, you know where and how to do that separation while gaining additional debugging information.

Overmind devtools showing effects

Example of a Complex Action# When you need to work on complex logic like debounced search, composing actions together, etc., it can often be expressed better in a functional way. The great thing is that you can move between the default imperative actions and the functional API to your liking:

```js
export const search = pipe(
  mutate(({ state }, query) => {
    state.query = query;
  }),
  filter(({ state }) => state.query.length > 2),
  debounce(200),
  mutate(async ({ state, effects }) => {
    state.isSearching = true;
    state.searchResult = await effects.api.search(state.query);
    state.isSearching = false;
  })
);
```

Why did you develop Overmind?# I learned so much developing cerebraljs. I think it is still a compelling way to write declarative logic, managing state and effects, but there is no way it will ever have first-class support for TypeScript. The API is too "exotic", so TypeScript was one reason. The other reason was the push back and forth about value comparison vs. tracking mutations. You can think of Redux as value comparison and MobX as tracking mutations. The two approaches affect how much time you spend on boilerplate, how much time you spend evaluating performance issues, and what the API looks like. For me, it is only a question of which one of the approaches gives the best developer experience. I love the implementation of Redux. It is such a neat and straightforward idea, much because of immutability.
That said, the resulting API with reducers, configuring the project, boilerplating actions, mapStateToProps, and being careful about what state you expose related to performance does not give the developer experience I want. MobX is the opposite. The implementation can seem magical, even "hackish" (although it is not), and you go against "mutation is the root of all evil". But the developer experience is impressive in comparison! Overmind takes the experiences learned from Redux, MobX and Cerebral and tries to create the best developer experience possible, based on my personal experience and the feedback from the Cerebral community and people testing Overmind during its development. Read the MobX interview to learn more about the state management solution. See also the Redux interview with Dan Abramov. What next?# For me? Well, I am going 100% freelance after summer and will spend my time with startups. I have been helping out quite a bit with Codesandbox since the beginning, and now that they are funded, I can help out even more. So I am really looking forward to sharing experiences building such an insanely complex application, both in terms of state management and UI composition :) What does the future look like for Overmind and web development in general? Can you see any particular trends?# What I learned from Cerebral is to "lock the API" after the official release. We spent six months iterating on the API of Overmind to see how "API-less" and straightforward we could make it. Although you write almost no types in your app, there is a lot of typing inside Overmind; this also took a lot of iterations to get right. That does not mean there will not be new features added to Overmind, but we will not change out anything. If we ever get to a point where we want to do something radically different, that will be a new project. So now that we have our API, we want to see what we can do with the concept of building apps without the UI. As previously mentioned, the idea is nothing new, and you can do it with all the previously suggested solutions, but the tooling for doing so can be improved a lot. Even to the point where you do not have to fire up the browser to work on your state and logic. No more questions about where your state and logic should live. Inside a component? Where in the component tree? You build your app, and then you attach a UI to that app. That does not mean component state and logic are terrible; they are essential. But the state and logic you put in components will instead be related to building the actual UI, not defining how your application should work. This way of thinking has helped me, and I want to explore it even further. "UI as an implementation detail" :-) What advice would you give to programmers getting into web development?# Take courses and try to build something, even though it has been created before. There is no right and wrong. React is not wrong about their approach; Vue is not wrong about their approach, and Angular is not wrong about their approach. You can build most things with any one of them. Who should I interview next?# You know, I have this guy I do a podcast with who built a library called Immstruct a long time ago. It was quite popular and even mentioned in relation to React. What is interesting about this project is that it has many of the ideas popularized today, also related to components. Even though this project does not have broad usage now, it could be fun to see that these ideas still live out there in other types of implementations.
:-) So, like, do the same interview as if Immstruct was released today: how they thought about web development then, the future, etc. I think it would be interesting :) Anyways, just an idea! Any last remarks?# You know, building tools and putting them out there is not easy, because it can often come across as "what you are doing is wrong, this is the way to do it". But that is not the intention. The intention is to share concepts and approaches to solving problems. We spend a lot of time in front of the computer trying to solve real problems for people; the more we iterate and share knowledge around tools, the more effective we become at solving these REAL problems. Conclusion# Thanks for the interview, Christian! I think Overmind is an excellent example of what can happen when you reinterpret what exists while taking the current constraints, or lack of them, into account. The fact that Proxies are now widely supported by browsers has opened new doors for developers to explore. To learn more, head to the Overmind site and check it out on GitHub. Note that the solution works with React, Angular, and Vue.

webpack-config-plugins - Best practices for webpack loader configurations - Interview with Jan Nicklas

Managing webpack configuration can get tough, especially if you try to track best practices and optimizations. To address this problem, Jan Nicklas has come up with a solution in the form of webpack plugins. Can you tell a bit about yourself?# This fascination has never stopped; it drove me into studying CS and somehow turned a hobby into my job. By now I have worked for nine years in Germany, Austria, Switzerland, and the US. In 2015 I discovered that webpack would allow me to write an offline-capable web app, but I had to repeat some manual steps on every change. Because of this, I discovered html-webpack-plugin and eventually became its maintainer. How would you describe webpack-config-plugins to someone who has never heard of it?# Webpack already provides very powerful defaults, but to be flexible, it asks you to configure your loaders manually. webpack-config-plugins offers you multiple presets for your loader configuration to reduce the boilerplate part of setting up webpack. How does webpack-config-plugins work?# During the plugin initialization phase, the webpack-config-plugins presets will inject additional loader and plugin configurations into your webpack configuration. Webpack provides a mode option which allows you to optimize it for development or production. webpack-config-plugins uses this mode to decide whether the presets should be optimized for development or production.
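As a rough sketch of the idea, adding a preset boils down to registering a plugin. The package and plugin names below follow the project's common preset as I understand it; verify them against the project README:

```js
const CommonConfigWebpackPlugin = require('common-config-webpack-plugin');

module.exports = {
  // the presets inspect this mode to pick development or production settings
  mode: 'production',
  plugins: [
    // injects maintained, tested loader configurations (JS, TS, styles,
    // fonts, images) instead of you writing them by hand
    new CommonConfigWebpackPlugin()
  ]
};
```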
How does webpack-config-plugins differ from other solutions?# There are already excellent boilerplates available. Unfortunately, most of them are opinionated, add another layer of configuration on top of webpack, and lock you into that. webpack-config-plugins is an attempt to break these boilerplates into smaller pieces to allow you to pick the presets you need and to keep you in control, but still let you update your loader configurations. The configuration of a preset is the same you would write into your webpack config manually. Because of this, users should be able to understand or even modify presets quite easily. Why did you develop webpack-config-plugins?# Our requirements often led us away from prebuilt boilerplates. However, after some months of development, it was costly to upgrade webpack, webpack plugins and webpack loaders because we didn't have tests for our webpack configurations. The problem led to the idea of automatically tested webpack configurations, to understand which loader or plugin upgrade would influence your build asset results or your build performance. What next?# The automated performance tests seem to provide a lot of value. However, the work on them has just begun, and there is more to be done. What advice would you give to programmers getting into web development?# Try to find a small design or app that YOU really would like to build, and start. Your first work doesn't need to be perfect, and you will learn a lot from your mistakes; keep going until you reach your goal :) Who should I interview next?# Even Stensberg (@evenstensberg) works hard on webpack-cli. Conclusion# Thanks for the interview, Jan! I think packaging webpack best practices into plugins is a worthwhile idea. The approach is flexible enough to let users pick only the presets they find valuable while maintaining the rest on their own. Check out webpack-config-plugins on GitHub to learn more! See also the online app that helps you to choose the right presets.

SurviveJS - Summary of 2018

2018 was mostly a consulting year for me. I also traveled within Europe and discovered countries such as Croatia and Armenia. It was my first year as a conference organizer, as I helped to set up two conferences in Helsinki, Finland. Both were well received, and there's more to come. It may evolve into a business direction of its own. Consulting# My main clients in 2018 were eBay (Berlin) and Kleiner Perkins. The former case was a continuation of training performed in 2017, and I spent an entire month in Berlin with the client. The Kleiner Perkins case was more involved than the eBay one, and I dedicated most of the year to it. The starting point for the case was Reactabular, a table framework for React I had developed earlier. Given I was too busy with the eBay case when this one popped up, I handed it to Andrey Okonetchikov, a good developer friend of mine in Vienna. Once I came back from Berlin, we spent the remainder of the year on the client project, and by now we work nicely together. The cases were technically demanding, and I learned a lot in the process. I see now why TypeScript is becoming the mainstream option for larger scale projects. I also appreciate the component-driven development approach more. Even though building a style guide requires effort, it can have significant payoffs, as you don't end up reinventing the wheel again and again. Outside of this work, I did GraphQL-related development for my conferences to make them easier to run. I finally saw the light with css-in-js. Especially the next generation of the libraries, including solutions like linaria and astroturf, seems promising as it avoids the runtime cost generally accepted with css-in-js. Publishing# I didn't have as much time to focus on publishing as in earlier years. The blog grew by thirty posts, most of which were interviews. I'll continue with the interviews as I feel they're an excellent way to highlight ideas that might otherwise go unnoticed. The webpack book received updates for webpack 4, and the maintenance book improved. There are still plenty of improvements to be made. Public Appearances# I visited several conferences during the year and even spoke in a few. The highlights for me include AgentConf, WebExpo, WebCamp Zagreb, Concat, ScriptConf, and Halfstack London. I also visited JIMDO at Hamburg, spent some webpack time in Munich, checked out YGLF in Kiev, saw ReactiveConf in Prague, went to JSCamp in Barcelona, and finally spoke at JSConf Armenia. I was one of the hosts for React Finland 2018 and also helped a lot with GraphQL Finland 2018. That's enough for one year, and I hope to travel less in 2019. React Finland 2018 - 24-26.4.2018, Helsinki# React Finland was the first trial by fire for the people organizing it. The event was a great success, and now we are looking forward to the next edition, held on the same dates in 2019. We've begun early bird sales and refined the format further. It's still three days and single track, but this time around we took the idea of themed sessions and went all-in with it. GraphQL Finland 2018 - 18-19.10.2018, Helsinki# Encouraged by the success of React Finland, we decided to organize a GraphQL-themed one in the form of GraphQL Finland. The target was to have a vendor-neutral, international conference, and I feel we reached this goal. We took some of the learnings from the first conference and then proceeded with the second one. Again, it looks like there's going to be a second edition in store.
Organizing both events gave me insight into the other side and increased my appreciation for people running events. It's a great exercise in logistics, and it takes a lot of effort to pull off an event. During this process, I've developed tooling to support these ventures, and I expect I'll do further related work this year. Business# Business-wise I'm in a good position for 2019. The new event direction is promising, and plenty of work awaits even if client work doesn't become available. At least for the time being, I'll try to avoid cases with a full-time commitment, as it makes it tricky to find the energy and time required for writing. That said, I'll remain open to consulting and coaching style work where I can help an organization to reach their technical goals or to improve the way they work. As in 2018, I am open to collaboration, and if there's something that goes beyond me, I will tap into my network to find the best fit for the case. I hope to get new book versions out and even reach new editions on some. Before starting something new, I would prefer to have three solid books out there. Likely this will lead to some form of branching again, where one book becomes two when it's too big to maintain. Conclusion# Even though 2018 didn't turn out as I expected, I feel it was a good year still. I am particularly happy that the conferences I helped to organize turned out well. I also gained plenty of expertise which will come in handy in 2019. I have a stronger understanding of the stack, and I can see where the world of web development is heading. We are living in exciting times.

webpack-plugin-serve - A Development Server in a webpack Plugin - Interview with Andrew Powell

Usually, when you use webpack, you also have to set up its development server. Traditionally doing this hasn't been trivial and has required a certain amount of expertise. That is one of the reasons why I wrote the webpack book available on this site. To learn about an alternative approach, I am interviewing Andrew Powell, the developer behind webpack-plugin-serve. Can you tell a bit about yourself?# Florida Man, have been a remote developer for ten years, and am pretty passionate about Fishing and Fishery + Ocean Conservation. There's a good chance I'm on a boat as you're reading this. On the nerdy side of things, I love me some Team Fortress Classic, have owned an N64 for forever, and my focus these days is squarely on Node.js for backend and DevOps. How would you describe webpack-plugin-serve to someone who has never heard of it?# I'd wager the conversation would go something like this; Sean Connery: What in the bloody hell is a webpack-plugin-serve? Me: Mr. Connery, you're drunk again. Relax. You know webpack, and how webpack has plugins to do all kinds of different things, right? Sean Connery: I love me some bundles. Pass the Scotch. Me: Right. So to test your bundles on your machine, you need to run a local web server. Something like Express, Koa, or Python's SimpleHTTPServer would do the trick if you just needed something basic. Sean Connery: Pythons! Going to get my Walther PPK, will dispatch them with haste. BRB. Me: Moving on. Now, if you didn't have to spin up your server and take care of all the setup so the server would know where your bundle is and which files to serve, for each bundle you work on, wouldn't that be swell? Sean Connery: I am quite swole, yes. Me: So that's where this plugin comes in. It'll create a web server that stays running for the duration of the webpack process and goes away once the build process has ended. Leveraging webpack's watch option keeps the process running, so the server doesn't have to. Sean Connery: This conversation is sobering. Me: Indeed, mind-blowing. And to top it all off, there's no new CLI to learn, no avalanche of CLI flags to understand, and you can pretty much mold it to your environment's needs. Sean Connery: ... Me: Right? How does webpack-plugin-serve work?# webpack-plugin-serve is a self-contained development server triggered by a webpack build and part of the webpack process. Users must add an instance of the plugin to their webpack configuration. A configuration might look something like this:

```js
const { WebpackPluginServe } = require('webpack-plugin-serve');

const options = { ... };

module.exports = {
  // an example entry definition
  entry: [
    'app.js',
    // important: this is where the magic happens in the browser
    'webpack-plugin-serve/client'
  ],
  ...
  plugins: [
    new WebpackPluginServe(options)
  ],
  // important: webpack and the server
  // will continue to run in watch mode
  watch: true
};
```

With that, we're leveraging quite a few third-party packages to make it all work. When a webpack build is initiated, the plugin sets itself up. That includes setting up a Koa application, middleware, and a few other static goodies that need to be ready to go down the line. Once a build starts, the plugin leaps into action. A web server is spun up and attached to the Koa application, a WebSocket server instance is connected to the web server, and the plugin begins listening to the compiler instance for notification of a refreshed build.
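For a fuller picture, the options object above carries the server settings. The option names below are indicative of what the project documented at the time of writing; treat them as a sketch and check the README for the current API:

```js
const options = {
  // port for the development server
  port: 55555,
  // director(y/ies) the server should serve bundles and assets from
  static: ['./dist'],
  // serve index.html for unknown paths, useful for SPA routing
  historyFallback: true,
  // enable Hot Module Replacement messages to the client
  hmr: true
};
```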
If Hot Module Replacement is enabled, it'll communicate changes via WebSocket to the client/browser, and you'll see changes based on the options passed to the plugin. There are different ways to configure and use the server, and we've prepared a few recipes so it's faster to get started; we're keen to add more. How does webpack-plugin-serve differ from other solutions?# First and foremost, it's a plugin. Before starting development, we searched quite a bit to try and find a pre-existing, similar solution, and we believe this is a novel approach for webpack. As a plugin, it doesn't have the learning curve of a separate command line interface, and there are no subsets of flags to learn or understand to use it. Plugins are one of the first things that new webpack users learn about - a perfect entrypoint for a bolt-on development server. And by leveraging the compiler directly, the server can offload responsibilities, like file watching, to the compiler and can avoid reinventing the wheel, thus reducing complexity. WebSockets without a Second Server# Just as with webpack-serve, we chose to use WebSockets for server-client communication (the magic that enables Hot Module Replacement instructions in the client/browser). Unlike webpack-serve, we were able to leverage a new "serverless" WebSocket server implementation. We learned from webpack-serve that while the intention behind a secondary WebSocket server was good, it increased complexity and issues with minimal benefit. Simpler Usage Due to Simpler Architecture# We also took the approach of building in support for the most popular feature sets of the other two development server options. Only this time around, there's no getting fancy with it:

- User-defined and user-ordered middleware is available, though vastly simplified as compared to webpack-serve.
- Features like HTML History API Fallback, Proxying, and Compression have support baked in, though we differ in that options are passed straight through to the underlying dependencies. That makes use and documentation much easier for the end user, as there's no intermediate layer to have to understand.
- Useful overlays for errors, warnings, and progress are included out of the box, were developed using a somewhat-standardized approach, and have a sexy, uniform look and feel for a consistent experience.

I'd also argue that this approach is far cleaner than the others that preceded it. We've given a lot of consideration to how the feature set might be expanded and have put an architecture in place that should allow for new features to be supported without adding the kind of complexity that cripples maintainability. We learned quite a bit from the shortcomings of both webpack-dev-server and webpack-serve and made an effort to improve upon them. Why did you develop webpack-plugin-serve?# After parting ways with the webpack org, I took up an interest in Rollup. Partly because of requirements for a new professional position, partly because I've always been fascinated by Rollup's approach, and partly because I wanted to continue contributing to the bundler space. When I talked to those folks about joining on, we identified a few needs for the project. One of them happened to be a full-featured, robust, project-supported development server. And so I went about researching what already existed for Rollup and stumbled upon rollup-plugin-serve. Full credit where credit is due - I hadn't considered that a dev server could be self-contained in a plugin until I ran across rollup-plugin-serve.
It is/was a brilliant concept. While still researching how I wanted to go about a Rollup development server, I was approached by a cadre of talented Brazilian developers (@matheus1lva, @sseraphini) who were bummed about webpack's decision to shutter webpack-serve. Plugin Based Approach for webpack# Their idea was to, at the least, fork and maintain webpack-serve. For Matheus and Sibelius, webpack-serve was a better choice than webpack-dev-server, despite its own set of quirks. And so they asked me if I'd like to help with this new effort. At that point, I didn't have much interest in doubling back to webpack, given my new direction of focus. But after discussions, the idea quickly emerged that this could be a platform to launch a standard experience for many bundlers, not just restricted to webpack. And at that point, I was hooked. Targeting webpack was the most logical starting point. I had accumulated a lot of knowledge about how a server should and shouldn't interact with webpack compilers and bundles by maintaining webpack-dev-server and authoring webpack-serve. Between that and the real-world, day-to-day user perspective that Matheus and Sibelius were able to provide, we were able to create something great. Much of the plugin is just "plumbing," but it's how that plumbing is arranged that makes this project a stand-out in my mind. What next?# Concerning this space, we're abstracting the codebase for the plugin into bundler-serve, which will act as a platform for bundler development servers. Matheus and I have already started the abstraction work, and we'll be targeting Rollup in the coming weeks as the first project to bind to the new platform. Eventually, webpack-plugin-serve will use webpack-specific bindings to bundler-serve as well. It's an ambitious plan, but I feel we've positioned ourselves nicely for a smooth path forward. Outside of the dev-server and bundler realm, I have a giant list of npm modules that I'd like to release, for which there aren't existing or effective alternatives (@sindresorhus is my npm hero). And I'm going to continue to look to help out existing projects that need maintainers. markdown-toc is queued up shortly. What does the future look like for webpack-plugin-serve and web development in general? Can you see any particular trends?# I can see us adding more features, or developing the modules to provide pass-through features, as use cases arise. The Node server space is so rich in functionality that we should be able to expand it quickly, or, most importantly, provide excellent direction for how users can apply needed functionality easily. Aside from that, the stack is pretty solid. I'm sure the webpack userbase will surprise us with scenarios we haven't considered - they always do. Predictions Are Difficult# As for web development, making bold predictions tends to produce little more than hot takes. The "How to stay relevant in N years" trend on Twitter is evidence of that. And I've been awful at making predictions - I thought Twitter was a fad, I thought MySpace would always own Facebook, I passed on Bitcoin at $100US. If I had to make a somewhat educated guess, I would say that ES Modules will start to take off in 2019, and tools like Rollup will take more substantial roles, while tools like webpack will begin to be considered "legacy". The Rise of a New Generation of Bundlers# You can already see some evidence of that with the rise of projects like Fastpack and Parcel, and the continued, steady increase in the popularity of Rollup.
I also believe 2019 will be the year of GraphQL. While 2018 saw GraphQL gain wide acceptance, I see enough to suggest that it's going to spread like wildfire. pnpm, NestJS, Vue.js# I could also see projects like pnpm gaining a wider userbase and influencing npm proper, just as Yarn has. NestJS is something to take a good look at, and I wouldn't be surprised if Vue.js made a more significant dent in the next year. Hopefully, that was sufficiently vague and will age well :) What advice would you give to programmers getting into web development?# Man, I started in 1996, and the path forward is so much different now. The path you take does a lot to influence how you move forward, and starting now would put me on a much different path. My path took me from platform/system development to (what we now call) full-stack to the front end to back end / devops. I'd say that starting now is probably daunting. What you need to know these days as a front-end developer is staggering, and a bit ludicrous. So strictly for folks starting fresh:

- Ignore the trends. Focus on established tech to get up to speed. Trends change daily; established tech sticks around.
- Find a good source of learning, and stick with it. Back in the day, my source was HTMLGoodies, and I still have a binder full of printouts (because nostalgia) that helped me to get going. Try to avoid a short attention span on information sources. That'll provide some consistency and continuity. Folks like Wes Bos genuinely care about teaching the people they're making content for.
- Choose a focus; don't try to do it all at once. There are legitimate geniuses out there, but I'm not one of them. It took me many moons to accumulate the bit of knowledge I have, and I did that by focusing, learning, and refocusing when I felt I had reached proficiency.
- Avoid "thought leaders" and anyone who calls themselves one. These folks are prolific on social media, and just like talking heads, they want to keep your mind spinning and stay in the forefront. They won't help you gain traction, but they will keep you context shifting.
- Start with the basics. If you're getting into JavaScript, really learn the basics first. Don't dive straight into frameworks like React, Vue, or Angular. The same goes for all languages.
- Adopt a code style. Your preferences will evolve, but it's good to use established patterns to get started.
- Learn how and why things work. When you understand the underpinnings, debugging becomes a touch less painful.

So those are my truths. There's no possible way they're truths for everyone; they're instead lessons I've learned over time. If they help someone out, that's great! Who should I interview next?# @matheus1lva! Brazil is producing some notable talent in engineering and JavaScript. Any last remarks?# We're just a few devs hoping to provide a better experience for everyone, and we appreciate folks spreading the word about it. Please don't hesitate to open an issue or hit us up on Twitter. Conclusion# Thanks for the interview, Andrew! It feels like webpack-plugin-serve fits a niche in the ecosystem. I particularly like the fact that the underlying architecture will enable collaboration across different bundlers. To learn more, check out webpack-plugin-serve on GitHub.

React Union - React for CMSs and Portals - Interview with Tomáš Konrády

React has uses beyond application development. One of the perhaps surprising use cases is to integrate it within a Content Management System (CMS) such as WordPress. To get a better idea of how this could work out, I am interviewing Tomáš Konrády. Can you tell a bit about yourself?# I work for Lundegaard in Prague, living in Hradec Králové. Recently I have started to fall in love with open source and the Ramda library. The result of that is a few projects. The first of them is ramda-extension, where our core team of Ramdists created point-free utility functions composed with only Ramda functions. The second open-source project is React Union, the topic of this interview. In my spare time, I either draw, play a guitar or exercise with a kettlebell. How would you describe React Union to someone who has never heard of it?# The purpose of the React Union project is to help with developing React applications that are situated in JavaScript-unfriendly environments. What do I mean by these? For us, in Lundegaard, it is a Java CMS backend. For others, it can be any non-JavaScript CMS such as WordPress or Drupal. The React Union project consists of three parts:

- <Union /> component - the Union component is responsible for assembling one logical virtual DOM from physically distributed HTML fragments.
- react-union-scripts - our SDK for developing large applications.
- Boilerplates - starting points for projects that also show how to use the component and the scripts together.

How does React Union work?# Assume that the code below is the output of your server:

```html
<html>
  <body>
    Generated content by CMS.
    <div id="news-feed"></div>
    <script
      data-union-widget="news-feed"
      data-union-container="news-feed"
    ></script>
    Generated content by CMS with nonpredictable markup...
    <div class="app-container">
      <div id="customers-chat"></div>
      <script
        data-union-widget="customers-chat"
        data-union-container="customers-chat"
      ></script>
    </div>
    <script src="js/app.bundle.js"></script>
  </body>
</html>
```

Pay attention to the script tags with the data-union-widget attribute. The tag describes which application should be rendered at which place in the document (described by the data-union-container attribute). Now let's look at our index file in which the Union component is used:

```js
import { Union } from "react-union";

const routes = [
  {
    name: "news-feed",
    getComponent: done => import("./containers/NewsFeed").then(done)
  },
  {
    name: "customers-chat",
    getComponent: done => import("./containers/CustomersChat").then(done)
  }
];

const App = () => <Union routes={routes} />;

export default App;
```

The Union component scans the HTML for our script tags - we call them widget descriptors. Then, combined with the route definitions above, they become React containers. The component utilizes portals under the hood, so we can be sure that even though the components are physically rendered in different parts of the real DOM, Union will assemble one logical virtual DOM for us. Then we can provide one context to all of our containers. We can share the application state, theme preferences, etc. across all of our containers. OK, why all the fuss, why not render the components directly? Let's imagine that we don't have control over the response from the server. For example, it can be the result of a CMS where administrators can drag and drop whatever application or widget into their views. We do not know in advance which of our apps should be rendered, or where. To sum up, the Union component allows us to define which React containers users can use in their system.
The component will ensure that the right component is rendered in the right place. I have described just a single use case of how Union can be used, but there is more you can do. For example, you can pass data from the server or even share common data across all rendered containers. How does React Union differ from other solutions?# I don't think that there are many other solutions available. I know only about react-habitat. That library is focused on isolated components that share neither context nor state. But there are surely ways to achieve the same (and better):

- "Just" change the backend technology entirely for a technology that allows you to prerender.
- Turn your CMS into a data source - we call this a headless CMS.

But sometimes there is no budget to change the backend technology, or you simply cannot, and that is where React Union shines. Why did you develop React Union?# At my work, half of the backend programmers are certified Liferay developers. They specialize in development for that platform. Liferay is a complex environment written in Java, and a big part of it is a CMS. Our clients love using it, and our backend developers have both great insight into it and knowledge about it. Neither clients nor backend developers will stop using Liferay soon. But I am a JavaScript developer, and I don't care what backend technology is used. On top of that, Liferay takes about 10 minutes to start. :) I wanted to use a solution that is agnostic to the CMS platform. React Union is the result of that. What next?# Dynamic rendering of components is one thing, but state management in CMS environments is the second and maybe more complex one. At Lundegaard we love Redux (yes, we are going to keep using it even though React hooks are on the way :)). As a result, we have started to open-source redux-tools, our solution for modular Redux. It is the younger brother of React Union that we use alongside it. What does the future look like for React Union and web development in general? Can you see any particular trends?# Yes, there are trends, both good and bad ones. Among the good ones, I count the focus on the overall performance of web applications. We can speak either about the whole philosophy of Progressive Web Applications or about the direction the React library is heading with its focus on responsive GUIs. The next big thing is undoubtedly WebAssembly (WA). I think that once WA is well supported across browsers, remarkable new approaches and technologies that let us develop with native performance will start to emerge. I have to say I am not a big fan of either TypeScript or Flow. Those two solutions are ways to bring static typing into the JavaScript world. I am aware that I belong to the smaller group of JavaScript developers holding this opinion. But I would recommend that everyone from the other group of developers take a look at the Clojure and ClojureScript world. There they have understood for a long time that static typing is not a silver bullet for safe apps without bugs. What advice would you give to programmers getting into web development?# I would recommend digging into the basics. It is essential to genuinely know HTML, CSS, and JavaScript before starting to add any frameworks or libraries to your skill set. Who should I interview next?# Scott Sauyet aka CrossEye - one of the core members of Ramda. What are the plans for the Ramda library?
Brent Jackson aka jxnblk - creator of my favorite CSS-in-JS libraries, styled-system and rebass. I would like to know what new project he is going to release. Rich Hickey - creator of Clojure. I am interested in what he would say about TypeScript or Fantasy Land. Any last remarks?# I want to thank all members of our small team that develops the React Union project for their hard work! Namely aizerin, jamescoq, and wafflepie. Conclusion# Thanks for the interview Tomáš! I am not a WordPress developer, but I can see how React Union could come in handy in that context and others. Learn more about React Union at the project site. See also React Union on GitHub.

Unicon - Wrangle SVGs from your favorite design tool - Interview with Travis Arnold

So far design and development have been considered separate disciplines. Recently tooling has begun to appear to bridge this gap. To understand more about the topic, I am interviewing Travis Arnold, the author of Unicon. Can you tell a bit about yourself?# How would you describe Unicon to someone who has never heard of it?# Unicon is a set of tools to help designers and developers sync SVGs from their favorite design tool to their codebase. How does Unicon work?# Unicon is split up into multiple packages to allow various use cases. I'll explain the main packages below: unicon# The unicon package, the backbone of the solution, is responsible for sourcing SVGs from any design tool. Right now there are tighter integrations for Figma and Sketch, and then an option to read a folder of SVGs for use with any other tool, like Adobe Illustrator or anything else that can export SVG. In the future, I would like to have options for any design tool that has an available API that Unicon can integrate with. In a simple use case, we can import one of three functions and gather raw SVG strings from our design tool of choice. Take a look at the following:

import { getSvgsFromSketch } from "unicon";

getSvgsFromSketch("./path/to/illustrations.sketch").then(svgs => {
  // Now we can do whatever we want with the raw SVG data
});

Each function adheres to the same style and signature, with some extras needed in some instances, like the Figma API needing authentication. unicon-cli# Next, we have a CLI tool to help automate sourcing the SVG data and creating a file of exports. A typical script could look like the following:

{
  "scripts": {
    "icons": "unicon figma 5X...2ge --name icons --transformer json"
  }
}

In this case, we are sourcing SVG data from a Figma file. We also pass a few options to specify the name of the generated file as well as how the data is transformed using the unicon-transformer-json package. unicon-react# Finally, the unicon-react package allows rendering SVGs universally in React and React Native with the same component. It understands the JSON chunks created by the unicon-cli tool.

import Graphic from "unicon-react";
import { Archive } from "./icons";

export default () => <Graphic source={Archive} scale={4} />;

How does Unicon differ from other solutions?# Like other solutions, Unicon works with exported SVGs. It does this by using the getSvgsFromFolder function, but I also wanted to support exporting directly from the tool itself when possible. Using the provided APIs from Figma and Sketch, Unicon can keep a short feedback loop between design and development. Why did you develop Unicon?# I was initially inspired by what GitHub does to manage their icon set. I liked the approach of using Figma's API to power an icon system and keep it in sync. I had previously made a webpack loader for Sketch, so I wanted to see if I could create something better that was more flexible and included all design tools. What next?# An official docs site is next on the list. I want to create as many real-world examples as possible to make sure Unicon can fit multiple use cases. Eventually, I'd like to create an Electron app, so it's easier for people not as familiar with the CLI to manage their icon sets. Lastly, with the release of GitHub Actions, I want to look into how Unicon workflows can be more automated.
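As an aside on the folder-based sourcing mentioned earlier - a minimal sketch, assuming getSvgsFromFolder shares the signature of the other sourcing functions:

import { getSvgsFromFolder } from "unicon";

// Source raw SVG strings from a folder of exported files, e.g. from
// Adobe Illustrator or any other tool that can export SVG.
getSvgsFromFolder("./assets/icons").then(svgs => {
  // Same raw SVG data shape as the Sketch and Figma variants above
});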
What does the future look like for Unicon and web development in general? Can you see any particular trends?# In the future, I'd like Unicon to have even better support for design tools and possibly package management for illustrations. There seems to be a trend of design tools popping up right now, with everyone racing to be the next tool that exports directly to production code. In the end, I think everyone is bringing great innovation to this space, and I couldn't be more excited to be a part of it. What advice would you give to programmers getting into web development?# Always be experimenting and trying out new projects. Web development is one of the best fields to be in; with such a low barrier to entry, you can make your own opportunities. I highly recommend contributing to open source if you have the time. I've learned more than I could have ever imagined since I started sharing and collaborating more. It's also pretty fun getting feedback and working with people all over the world! Who should I interview next?# Brent Jackson. I'm always impressed and inspired by the work he puts out. Any last remarks?# Thank you for this interview! I'm excited about the future of design tooling. I can't wait to see it continue to grow and get better. Conclusion# Thanks for the interview Travis! I expect to see a lot of action in the space as developers and designers figure out how to collaborate better. To learn more about Unicon, head to GitHub.

Experiences on HalfStack London 2018

I was invited to HalfStack London 2018 as a guest as we are considering bringing the conference series to Vienna. Therefore it was vital for me to get an idea of the format and how the event runs. It was my second visit to London. Although the city is a little rugged, I like it, and it's easy to get around. It's famous for having a vibrant technology culture, and that showed positively at the conference. Overview of HalfStack London 2018# Fun at speakers' dinner HalfStack is known for being a cozy conference held in a pub environment (the backroom of one) while being limited to one day and roughly 150 attendees. I like the single-track format they chose, and I feel the schedule of the conference had been designed well. The day started with an inspirational talk by Chris Heilmann, then went into technical topics, after which it changed gears into more fun talks and eventually the afterparty. The progression made sense to me, as you tend to get more tired over the day. I might have preferred longer breaks and perhaps fewer speakers, so this could be something we'll do differently in a Vienna edition. The conference had a different feel than the others I've visited. It felt communal, and the fact that a quiz and other community-building events were included mirrored this. The venue was a little grungy in pure London style. Some supports were obstructing the view, and the projector wasn't the best, although it wasn't the worst one I've seen either. It might make sense to have a television in the back and mirror the slides there, as it was challenging to follow from the back, although the sofas provided in that section were nice. The venue also contained a bar from which food and drinks were served. Cafe 1001 - The venue Bringing best practices front and centre - Chris Heilmann# Chris Heilmann The conference started with a talk by Chris Heilmann. The key point I got out of the session is that we should push as much to tooling as possible. In particular, he mentioned the webhint scanner that has been designed to support web development. I agree with the sentiment, and this is why tools like Prettier gained traction: they allowed us to eliminate entire debates while enabling us to focus on what remains. The present and future of VR on the Web - Ada Rose Cannon# Ada Rose Cannon Ada Rose Cannon discussed the future of VR on the web. It was nice to see what's coming, although it feels like these are still early days for the technology, especially given the standards are still emerging. That said, it's the right time to start looking into the topic if you want to be amongst the first to adopt it. It's also a good chance to affect the development of the technology by participating in standards development. Building Bots with JavaScript - Alex Lakatos# Alex Lakatos Alex Lakatos discussed the emergence of conversational user interfaces. The point was that a UI doesn't have to be static. Instead, we can code fundamental interactions into a bot that behaves in a human-like manner while remembering what we've told it before. Doing this means we could tell a bot, for example, to book a flight to Berlin for the next Sunday as usual. Based on that, it would propose a trip that fits our earlier booking patterns and suggest scheduling it. I can see potential in this type of approach, as when done right it's also more accessible than the current options.
Fast But Not Furious: Debugging User Interaction Performance Issues - Anna Migas# Anna Migas Anna Migas discussed how to approach user interface performance and how to optimize it. The key learning for me was that there's still something for me to learn in browser developer tools. npm install disaster-waiting-to-happen - Liliana Kastilio# Liliana Kastilio Liliana Kastilio discussed the recent security disasters that have happened in the IT sector. Security shouldn't be an afterthought. Instead, it's something to take into account continuously. As we bring packages into our projects, we also increase the possibility of a security breach. It's in part an awareness issue but also a tooling and process level problem. It's tricky to solve, as packages evolve unpredictably. At the same time, the number of available packages keeps going up. Enter ES2018 - Andrico Karoulla# Andrico Karoulla Andrico Karoulla focused on the new features available in ES2018. The key point I got out of the talk is that we finally have a way to share memory across multiple processes. Earlier this had been a problem point when parallelizing execution, and I am glad to see it resolved. 100% CSS Mario Kart - Stephen Cook# Stephen Cook Stephen Cook's presentation gets my award for the most imaginative use of CSS. He implemented a subset of Mario Kart using CSS and a tiny bit of JavaScript. He found a way to encode sprite textures within CSS definitions and then animate between them. The keyboard controls were especially smart, as he was able to implement them with carefully crafted selectors. CSS is starting to feel like a complete programming language to me. If you know what you are doing, you can do a lot with it. Buying a House with JavaScript - Sean McGee# Sean McGee Sean McGee discussed how he uses JavaScript to hunt for a house fitting his criteria. So far he hasn't been lucky, but it was intriguing to see how he combines multiple data sources to generate a visualization showing where possible candidates might exist. The nice thing is that he managed to automate the process so that if his criteria are met, he'll receive a text message telling him to purchase the property. Home Automation with JavaScript - Jonathan Fielding# Jonathan Fielding Jonathan Fielding focused on home automation and how to approach it with JavaScript. The field is currently fragmented, and usually hardware doesn't work with all possible standards. The fragmentation makes it tricky to integrate multiple solutions, although it seems to be possible if you are smart and can develop JavaScript APIs on top of proprietary UIs. Reanimating the Web - Rob Bateman# Rob Bateman Rob Bateman discussed his transition from Flash to JavaScript. It was impressive to see the results. One of the outcomes of his work is AwayJS, a 3D graphics framework. The Most Important UI: You - Carolyn Stransky# Carolyn Stransky Carolyn Stransky focused on the crucial topic of self-care. It's all too familiar in the industry for people to push too far and burn out. I felt the talk fit the conference well, as it's a topic that requires awareness. Cats vs. Dogs - Tom Dye, Dylan Schiemann# Dylan Schiemann Tom Dye and Dylan Schiemann gave a breather-type presentation and figured out whether the audience prefers cats or dogs. There wasn't heavy content included, but the session had its place before more content followed. JavaScript at the edge - Cameron Diver, Theodor Gherzan# Cameron Diver and Theodor Gherzan Cameron Diver and Theodor Gherzan discussed how to develop and deploy JavaScript on hardware.
The talk included live demonstrations, and it was cool to see how it all came together as an audio waveform was mapped to multiple LED displays. This Talk Is About You - Jani Eväkallio# Jani Eväkallio Jani Eväkallio's talk was about shifting the perspective to why we are developing. Too often the focus of discussion is on technical details, and we can forget why we are in the business in the first place. Individual decisions matter, and through our personal choices we shape this world. Beats, Rhymes & Unit Tests - Tony Edwards# Tony Edwards Tony Edwards' talk was perhaps the most entertaining one. He is an amateur rapper, and he wanted to see if it's possible to transcribe his rapping to a computer using JavaScript. He did this through a set of experiments, and finally, in the end, something was working. He also got the crowd involved, which was great. Alpha, Beta, Gamer: Dev Mode - Joe Hart# Joe Hart Joe Hart's presentation was another entertaining one. He discussed the history of games and had several live examples. We played Flappy Bird together and had great fun overall, even if the last demonstration failed due to a coding issue. Conclusion# Chilling at the venue I like the concept of HalfStack. It's a small community conference that seems to have found its place. I feel it was a great showcase, especially for the local community, and it was amazing for me to be a part of it for a short while. Find my photos of the event online.

React India 2019 - The international React.js conference in Goa, India - Interview with Manjula Dube

It is exciting to organize a conference, especially when it's the first of its kind in a region. To learn more about one such event, I'm interviewing Manjula Dube from the React India (22-24.08.2019) team. Can you tell a bit about yourself?# Manjula Dube I am a software engineer based out of Berlin. I also teach React/JavaScript at Code University and Le Wagon. I love to code like I love Indian food. I organize Mumbai Women Coders, GDG Berlin, and React India. In my free time, I mostly work on open source and help developers in the community. I am also a conference speaker who loves GraphQL and React. How would you describe React India to someone who has never heard of it?# React India is a community-led non-profit initiative with an international flavor. The first of its kind in India, it is a three-day conference, with the first day focusing on workshops and the next two days on talks around React, React Native, and GraphQL. This edition will gather front-end and full-stack developers from across the globe to Goa, India. In this single-track event, you will learn more about React and surrounding topics while meeting some of the leading talents from around the globe. In addition to enjoying the conference, this is your chance to explore Goa. Regular and lightning talks will cover various React and frontend topics including React Native, GraphQL, VR & AR, Redux, Preact, and more! What does React India offer?# React India will be one of the first Indian conferences to host over 25 speakers from more than 15 countries - over 10 international speakers and over 15 Indian ones. React India will be the first-ever conference in the country built on a community-led effort of this scale. We are also focusing on diversity and getting a lot of female speakers on board. We offer our attendees the quintessence of professional front-end engineering knowledge throughout the event, with multiple workshops followed by two days of single-track talks by experts. How does React India differ from other events?# One for all, all for one. React India is a non-profit conference organized by developers for the broader developer community. We make a difference by focusing on curated developer content while enjoying the event together as a community. We are committed to building a diverse and inclusive conference for everyone. React India has a strict code of conduct which is enforced on everyone who is part of it. All organizers, sponsors, speakers, volunteers, delegates, and attendees of all levels have to adhere to our Code of Conduct. Here's our annual conference accessibility statement. Why did you decide to arrange React India?# There is a sizeable Indian community of developers who are eager to learn but lack international exposure. React India will be the platform for establishing that global connection and showcasing the power of this fantastic technology format. What next?# React India's goal is to become India's annual international conference, aiming to grow each year from its inception. We plan to reach out to more Indian developers in the upcoming years. React India provides a platform both to network and to share your knowledge with the global developer ecosystem. What does the future look like for React India?# I haven't thought that far yet, but we will be doing this conference every year so that we have both international and local flavor in our JavaScript and React community.
I want to bring in more women developers to speak on React/JavaScript topics. My ultimate goal for React India is to continue growing our community by inspiring a whole new group of people to join the amazing developer community! What advice would you give to programmers getting into web development?# First of all, programming is not hard. Despite what people say, you don't have to be a genius to learn to program. Avoid copying and pasting, even if it takes longer to write the necessary line of code. Stick to it and work on it until you completely understand it. I would say keep learning and don't look back. Ask for help in the community. There are a lot of people in the community willing to help. My last piece of advice is to be determined and focused. Sometimes you will feel on top of the world, and sometimes you will question yourself. Whatever highs and lows you face, keep practicing, keep trying, keep learning, because that feeling when you achieve something will make you more proud of yourself. Always remember - "The hardest path has the greatest payoff." Who should I interview next?# Maybe some developers in India who are already excited about the conference, or speakers such as Ken Wheeler or Jani Eväkallio about how they feel about coming to India for this conference. :) Any last remarks?# I want to thank all the individuals behind the scenes who are working hard to make this conference an awesome one. Thanks to the second organizer, Sahil Mhapsekar, for helping me make this happen. Lastly, thanks to our sponsors React Finland and React Alicante. Come forward to show your support for this non-profit initiative by self-motivated individuals from the developer community. React India is looking for sponsorship. Through sponsorship support, we hope to invite as many deserving people as possible who would otherwise be unable to attend. Let us together create a meaningful experience for everyone. Learn more from the sponsorship deck. Conclusion# Thanks for the interview Manjula! React India will take place between the 22nd and 24th of August 2019 in Goa, India. You can learn more about the conference at the React India site and get news through @react_india on Twitter.

Experiences on WebCamp Zagreb 2018

I was invited to WebCamp Zagreb as a speaker and a workshop trainer. I gave one of my webpack workshops at the event and discussed static websites and the way I see them in my talk. It was my first time in Zagreb, and I was pleasantly surprised by the city. Zagreb reminds me of Prague or Vienna architecture-wise. There are fewer tourists and no subway, though. The price level tends to be low, and using a taxi is a realistic option. Overview of WebCamp Zagreb 2018# Oh, well There were close to a thousand people at WebCamp Zagreb. The event was sold out. One unique aspect of WebCamp Zagreb is that the community organizes it. There isn't an equivalent event in Vienna, for example. The conference has been running for many years, and during this time they have honed the process and the conference. The conference started with two workshop days. After that, there were two days of talks and an afterparty to end it all. The schedule was relaxed, and the talk lengths varied from roughly 25 minutes to 45 minutes. The conference was split into two main tracks and a smaller unconference one. The conference venue had one big hall that was divided in half after each day's keynotes. Sometimes you could hear some noise (applause, laughter) through, but that wasn't a big issue. Overall the arrangement felt fine, and there was enough space in the venue for all the people. The topics were varied. There were technical, design, and business talks of different levels. In this way, it reminded me of WebExpo, and I like the concept as it allows you to get exposed to more ideas. You learn more. Overall, the talks were of high quality, and there were a couple of pleasant surprises included. Day 1# Selfie by proxy I traveled to Zagreb a day before my workshop on Thursday. Next time I might reserve a day or two more to enjoy the city. The accommodation arranged by the organizers was in an apartment hotel right next to the venue. The apartments were close to new, and there was plenty of space. Compared to what conferences usually offer their speakers, this was luxury. Taking CI and automated testing seriously - Alen Ladavac# Alen Ladavac The first conference day started with a keynote by Alen Ladavac, the CTO of Croteam. Croteam is famous for developing the Serious Sam franchise and The Talos Principle, in addition to many other titles. In his talk, he discussed how game developers approach Continuous Integration and automated testing. Unlike in the web space, there isn't established tooling for automation, and testing is a severe problem. Often studios end up developing their own solutions. That's what Croteam did. According to Alen, it is highly important that gamers can complete a game and don't get stuck due to a bug or a design issue. How do you ascertain this for a complex game like The Talos Principle? Alen suggested a couple of ways - using bots and allowing game testers to add notes to the game world in-game. Although the puzzles of a game might be too challenging for a bot to solve on its own, bots can follow recordings made by people. The advantage of using bots is that instead of having people play through the game after each change, they can be run on a server while using accelerated time within the game engine itself. As a result, a bot can complete a 10-20 hour game within a twenty-minute run. Bots cannot replace human testers, but they can complement the effort and allow people to focus on other aspects such as "do things look right". People are also sometimes more clever in finding issues.
To make it convenient, Croteam has implemented bug tracking within their game engine, so you can attach notes to game assets and see notes other people have made. The lesson for web developers is two-fold here:

1. It likely makes sense to run a bot against the app/service you are developing (acceptance testing).
2. Making it easier for the users to report issues gives you higher quality reports.

Property-based testing is a mindset - Andrea Leopardi# Andrea Leopardi Andrea's talk was about property-based testing. It's a testing technique that can be used to generate test cases; I've used the approach in the past and have even written a solution of my own. Although I knew the basic idea, it was good to get a recap. Property-based testing relies on invariants - properties that must hold for any valid input of the function under test. Consider array sorting, for example. You know that at least properties such as the following must always hold:

The output must have the same length as the input.
The output must contain the same items as the input.
The output must be in ascending order.

Also, we have to define the types of parameters for the function to test. We could say we are sorting an array of integers, for example. The information is required to generate pseudorandom arrays that are passed to the invariants above to check that they hold. If a test fails, a proper implementation can reduce the failure to a minimal example. The problem is that often these invariants can be challenging to discover. Therefore the technique should be seen as complementary to unit testing. The most significant difference has to do with input value coverage. A property can cover a broader range of values and find corner cases you might otherwise miss. To solve the discovery problem, sometimes it's possible to use another implementation as an oracle. You could try to improve performance this way, for example. Designing with Accessibility in Mind - Bermon Painter# Bermon Painter Bermon discussed accessibility and its impact on design. If accessibility is considered in the design, it can open your software to new groups of people. Accessibility isn't a zero-sum game, as even so-called regular users can benefit from the improvements. Sometimes you may also be required to implement specific measures by law. Considering accessibility in the design phase may add to the initial cost of development and require different development practices. That's something I would have loved to learn more about. A Token Walks Into a SPA... - Ado Kukic# Ado Kukic Ado's talk discussed the authentication flow of JSON Web Tokens. I gained a couple of insights:

The authentication server that hands out the keys on successful authentication should be a separate service.
You should never store authentication-related information in local/session storage. Instead, it makes sense to keep it in memory (think Redux or a similar state management solution).
To survive a browser refresh, a session cookie on the authentication server is enough. It should be able to renew the auth on request after refreshing.

I would have loved to see a live demo, but apart from that, the talk was excellent. Where are the women? - Dora Militaru# Dora Militaru Dora's talk was about the absence of women in the industry. Perhaps the key point for me was that it's not only about fixing some percentages. Instead, it's at least as important, if not more, to be inclusive instead of only diverse. Having a diverse environment isn't enough if old decision structures remain in place. There's more structure than what's apparent.
Depending on the action, different structures of a network become active. And that's where inclusivity comes in. Fixing the gender gap also means adjusting the structures, and based on that, gender quotas may be justified until the structures have been repaired. Dora mentioned Project Implicit, a project by Harvard meant to discover social attitudes. Overall, the talk was thought-provoking and provided a new perspective on the discussion. How did we open source our knowledge and practices - Robert Šorn# Robert Šorn Robert discussed how he changed the knowledge processes of his company by starting a documentation effort. It started small, but once people realized the value, his Git wiki-based approach started bearing fruit. Eventually, people developed it into a website and added features such as search to it. A lot of these things are often implicit; just because a process hasn't been documented doesn't mean there isn't one. Exposing what's known allowed people to improve the content further. One side benefit was that onboarding new people to the company became easier as a result. Headless architecture and the future of websites - Heidi Olsen# Heidi Olsen Heidi's talk discussed how her company approaches technical problems by allowing multiple different technologies to be used. The critical point here was to maintain a headless API in between to separate the frontend from the backend. The approach is similar to what I use to develop my sites these days. She also mentioned the strangler pattern, which is useful for refactoring legacy software by writing around it and eventually removing the old parts. Day 2# Random statue Day two of the conference continued in the same format. Nitpicking terminology: are we using the right terms - Miro Svrtan# Miro Svrtan Miro's keynote was about the terminology we use and how we interpret it. What agile means to one person isn't the same as what it means to another. The critical point was that it's good to consider which terms we use and how we use them. Writing Superpowers for Geeks - Ivan Brezak Brkan# Ivan Brezak Brkan Ivan discussed why writing is an essential skill for programmers to acquire. By being able to write and articulate yourself well, it's easier to get complex messages through and, most importantly, to respond appropriately to trolls. Pricing shouldn't be hard - Vladimir Bogdanić# Vladimir Bogdanić Vladimir's talk was one of the highlights of the conference for me. He discussed how he approaches business, and pricing in particular. One of the key points was how to approach proposals. You should be clear on the value to the client and make it easy to understand the exact offering. By articulating the offer well, you can charge more, since there's more value on the table. Pricing-wise, he mentioned that you should charge a fixed price for known quantities and an hourly one for work that's not. Sometimes you might go from hourly pricing to a fixed one once you have enough experience. It's a different model, as then you tie your price to the value delivered instead of hours worked. The Power of Persuasive Design - Alex Kuhn# Alex Kuhn The critical point of Alex's talk was that design can be used to persuade people to perform specific actions or dissuade them from doing so. That's how calls to action work, since they shape attention. Discouraging patterns do the opposite, as they try to convince the user not to do something. Sometimes these are so-called dark UX patterns, as they play against the user. Alex's talk helped me understand that design is also about motivating people.
Indirection can be the way to go, as for some things, like getting fit, there aren't any shortcuts. Instead, you should help the users develop habits and encourage them as they reach towards their goals. Designing for impact - Vladimir Koncar# Vladimir Koncar Vladimir discussed how he grew his web business over the years, and there was a lot to digest. One of the key points was that even the best design won't help you if there's no business case. Sometimes design can turn a business into something great, as they did with a language learning platform. By addressing the problems in design, they pushed the existing platform to new heights. Delivering Fast and Beautiful Images and Video - Doug Sillars# Doug Sillars Doug's talk focused on how to get more performance out of images and videos on the web. It's a domain with a lot of low-hanging fruit ready to be picked, since people tend to miss these optimizations. At the simplest level, you should compress your assets well. You should also consider asset dimensions and load assets only if needed. Fluid typography - Sebastijan Dumančić# Sebastijan Dumančić Sebastijan's talk showed how to achieve fluid typography without ugly breakpoints. It's one of those techniques I'll try out in my work, as then I'll have more control over what a site looks like at different resolutions. Conclusion# Resting at WebCamp Zagreb WebCamp Zagreb was a new acquaintance for me, and if there's time, I'll try to go back next year. I liked the variety of topics present, and the venue was great as well. The conference is quite inexpensive compared to others of a similar caliber, and I have no doubt they'll be sold out next time too. In short, WebCamp Zagreb is a quality conference, and the city is excellent as well. Find my photos of the event at Flickr.

Experiences on WebExpo 2018

I was invited to WebExpo 2018 as a visitor since I spoke at the conference last year. Prague is one of my favorite cities in Europe, and the conference was good last year, so I decided to go again. Like last time, most of the attendees were local, and I estimate there were roughly a thousand people in total, if not more. The event was held in the same place (Lucerna) as last year. Although a shopping mall busy with people isn't the most obvious place for a conference, I feel it works quite well. My main problem has to do with the acoustics of the main hall. They had the same problem with echo as last year, so it was annoying to listen to talks there over an extended period. The other halls didn't have the problem. Overview of WebExpo 2018# Prague Like last year, the conference ran four tracks. The fourth track alternated between workshops and lightning talks, and I didn't spend much time on it, although the topics were interesting. I like the format, as purely technical conferences can often get a little tiring. I gain more from design or business talks. During this year's event, there were many insightful talks about design in particular. My only problem with the schedule was the lack of lunch breaks. It felt a little sad to skip a talk, and it can't have been nice for the speakers either, since people need their lunch. Perhaps the way to solve this would have been to condense most talks into a 30-minute format instead of 40 and put more emphasis on lightning talks. Sometimes the talks were forced into the long format without adding much content, and sometimes the speakers knew not to push it by cutting their talks to the 30-35 minute range, which was fine by me. Instead of taking questions publicly after a talk, WebExpo uses the concept of a speakers' corner where you can discuss with a speaker in person. I think it's an excellent alternative to a public Q&A, which can get awkward at times. I also like the way YGLF Kiev solved the problem by having a conversation with the host after the talk. WebExpo provided pastries and something salty to eat in the mornings. There were also reasonably priced lunch offerings and one option where you could get a hot dog against a registration (it seemed to be Czech-only, so I skipped it). I didn't find the paid lunch to be a problem, as it's quite easy to find a nice meal in Prague for a reasonable cost. Day 1# Given I traveled from Vienna by train in the morning, I had to skip a couple of the morning talks. Perhaps next time I'll arrive in Prague the day before. Microcopy: Your Users Will Fall in Love - Kinneret Yifrah# Kinneret Yifrah I started my day with a talk about microcopy by Kinneret Yifrah. Her main point was simple: there are opportunities for copywriting everywhere. Instead of writing in technical terms, we should write in human terms. It's these small things that count. Whether you give a technical warning or error message or a humane one can make a massive difference to the user experience and brand perception. Lead with Design - Diamond Ho# Diamond Ho The next talk I attended was about leading with design by Diamond Ho of Facebook. She works at the intersection of product and design and went through the design thinking process they use at Facebook. I learned from the talk that design is a non-linear challenge even if you present it as a linear process. The key seems to be in experimentation.
For an organization like Facebook, it's essential to be able to run experiments effectively so better results can be reached considering the different stakeholders. Another point she made is that sometimes a simple design can mean a massive amount of work. What's simple for the user can mean countless work hours for the developers implementing the feature, and the required changes can cascade through the entire system. Due to this, experimentation plays a significant role, as you don't have an infinite budget to use. Therefore you have to choose your experiments well. How Can Tech Teams Help or Destroy a Startup? - Jan Řežáb# Jan Řežáb I caught most of Jan Řežáb's talk about the Czech software industry accidentally, but it was interesting nevertheless. There's a sizable developer shortage in Czechia (up to 50,000 developers - can it be true?), and there's also an attitude problem. According to Jan, demotivated developers in Czechia prefer to remain in their companies while doing side work for extra pay. It's something that's hurting the industry, and his advice is that people should stop doing it. I can see why, but it's maybe not an easy problem to solve. Pitching to Win - Christian Barnett# Christian Barnett Christian's talk about pitching was the highlight of the conference for me. The speech reinforced some of my beliefs about pitching, and he highlighted techniques such as reframing the problem and putting emphasis on how people will hear you. The main point is that everyone will interpret your message through their own viewpoint and needs. Therefore what is heard and what gets through differs per person. He claimed that many people put a lot of effort into creating something but not enough into communicating the creation. Putting more focus on pitching and how the idea is presented has definite value, as if the point isn't made correctly, it won't go through. Christian didn't use any slides in his presentation (well, technically one, to show his name), and I found this captivating. You really had to focus on what he had to say, and I noticed he was using several techniques, such as repetition, in his talk. He had a set of ideas in mind he wanted to get through, and I think he managed to do that in the most interesting way. Design Systems at Scale - Rich Fulcher# Rich Fulcher Rich Fulcher from Google discussed how they implemented design systems (the famous Material Design) at Google and the challenges they faced. Given I'm a design system convert by now, there wasn't that much new in the talk for me. The critical point was that when an organization is large, keeping a system in sync can become a challenge. Different units might have different ideas and needs which have to be taken into account. A design system captures concerns that might otherwise be solved differently in each place and again be all over the place. In short, design systems allow capturing everyday needs into patterns which may be reused across the organization. Live Demo: React.js Portals and Modern JS Apps for CMS - Tomáš Konrády# I was curious about Tomáš's talk as the topic was close to my heart, given the way I develop websites these days. I've ended up using GraphQL as my CMS through GitHub and then rendering the data to static HTML through React. I've found the approach fits my needs well and will likely scale to new requirements as time goes by. Tomáš did mostly the same but from a different point of view. He demonstrated how to start from a CMS such as WordPress and then integrate React with it.
The result of his exploration was React Union. To tell you more about the solution, I hope to interview him for this blog shortly. Config-less, Server-less, Effort-less - Guillermo Rauch# I wanted to see Guillermo's talk, but given his speech was delayed due to a late flight and I was hungry, I decided to skip it and explore Prague instead. It was time to find some food, so I went to my favorite place. Guillermo's talk and the slides are online. Day 2# A conference host Day two of the conference continued in the same format. I skipped the morning keynote and started my day at 11. The Cost of (not) Testing - Lukáš Linhart# Lukáš Linhart Lukáš's testing talk was one of my favorites of the conference. The main point for me was that testing is an integral part of the culture. The testing pyramid alone doesn't capture enough, as it provides only a technical view of the topic. There are several layers underneath which matter more. You should always test the aspects you cannot afford to break. Testing should be seen as a way of managing risks and defining what is acceptable in a business context. Can you afford to lose a few hours, minutes, or seconds of sales? These are the kinds of limits you have to figure out and then test accordingly. After Lukáš's talk, I went for a long lunch with a good friend and resumed the day with a presentation about workshops. How to Design a Design Workshop - Bob Marvan# Bob Marvan Bob's talk was about how to use design thinking to design a design thinking workshop. The core point of the talk was that the techniques used in a design workshop can be useful for developing the workshop itself. He also made critical points about managing people and distributing responsibility. Another point was about energy: if you start a workshop strong, it will be easier to sustain a high energy level during it, although it might inevitably go down over time. His point about using psychology to help people learn was great. Workshops compress people's focus, and that's what you should use to your advantage to gain good outcomes. Design is Communication - Pascal Heynol# Pascal Heynol Pascal's talk was about his progress as a designer and what design means to him. He reiterated Shannon's theory of communication and pointed out that design follows the same model, since the purpose of design is to communicate an idea. Considering that, you can evaluate your design and the aspects affecting it. CSS in JS: Patterns - Vojtěch Mikšů# Vojtěch Mikšů Vojtěch's talk was about the adoption of CSS in JS within Cloudflare and how they achieved the transformation. Given I am going through a similar change in my work, it was interesting to notice the parallels. His problems were on a larger scale, given he works as a part of a large engineering organization. In that case, the critical point was documenting the approach well beyond the official documentation. He stated that the solution you choose doesn't matter, as it's more about the mindset, since it's relatively easy to move between the technical solutions as needed. Constant Curiosity: How Brands Earn Trust by Asking Firecracker Questions - Jon Burkhart# I skipped Jon's talk, but the presentation and the slides are online. My current work isn't brand-oriented, but I may check the talk later when it becomes more relevant. Conclusion# The famous Lucerna horse I think WebExpo was a solid event this year too. I hope they make space for lunch in next year's schedule and compress most talks to a shorter format (personal preference).
I also hope they manage to solve the acoustics issue in the main hall. Apart from these minor issues, I look forward to attending the event again.

Refract - Manage Component Side Effects the Reactive Way - Interview with Thomas Roch

It's difficult to write an application without side effects. Consider handling requests, dealing with third parties, or managing storage, for example. These concerns come up often. Refract by Thomas Roch provides a solution to the problem for React users. Can you tell a bit about yourself?# Among other things, I am the author of router5. Read the router5 interview to learn more about it. How would you describe Refract to someone who has never heard of it?# Side effects are one of the primary sources of scalability issues for an application: in React, lifecycle methods and event handlers are used to handle side effects (making requests, persisting values in storage, pushing events to a third-party vendor, etc.). Refract allows you to manage side effects in React (in Inferno and Preact too) using reactive programming: you can observe the props a component receives and decide to take action on what is observed. For instance, you can observe a text input, debounce it, and make a request to receive a list of suggestions: components like a typeahead are straightforward to implement with Refract. How does Refract work?# Refract offers a higher-order component that you can use to wrap a component you want to observe. You need to supply two functions to your HoC:

A function we call an "aperture": it is there for creating a stream of effects (as data).
A second function called a "handler": each value emitted by your stream of effects is passed to it, and it triggers side effects imperatively.

A simple example: you have a multi-step process, and you want to reset the page scroll position each time the step changes. Let's say you have a component called MultiStepProcess which takes a step property. With Refract you would achieve it this way:

import { withEffects } from "refract-rxjs";
import { mapTo } from "rxjs/operators";

// The aperture observes step changes, and for each step change
// will emit an effect
const aperture = initialProps => component =>
  component.observe("step").pipe(mapTo({ type: "SCROLL_RESET" }));

// The handler handles the effects emitted by the aperture
const handler = initialProps => effect => {
  if (effect.type === "SCROLL_RESET") {
    window.scrollTo(0, 0);
  }
};

// Finally, we wrap MultiStepProcess
const MultiStepProcessWithEffects = withEffects(handler)(aperture)(
  MultiStepProcess
);

In your aperture, you can observe the values of a prop or the arguments a callback prop is called with. You can also use any other data source you have available: a Redux store (we offer Redux bindings to observe actions and selectors), data from a WebSocket, global events, etc. The example above can be extended to fetch data, push analytics events, and so on. To learn more, see the online examples. How does Refract differ from other solutions?# Refract can be compared to solutions commonly found in the Redux ecosystem (redux-saga, redux-observable, redux-loop) but without the need to use Redux. Redux has seen a lot of innovation in managing side effects due to Redux stores being easily observable (actions with middlewares, the state with subscribe). Refract makes React components observable, and it enables colocating side-effect management with components: business logic can be isolated per view rather than being centralized, and applications become easier to code-split and therefore easier to scale.
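To ground that colocation with the typeahead case mentioned earlier, here is a minimal sketch along the lines of the example above. The query prop, the onSuggestions callback, the endpoint, and the Typeahead component are all hypothetical, not part of Refract's API:

import { withEffects } from "refract-rxjs";
import { debounceTime, map } from "rxjs/operators";

// Observe the `query` prop, debounce it, and emit a fetch effect.
const aperture = initialProps => component =>
  component.observe("query").pipe(
    debounceTime(300),
    map(query => ({ type: "FETCH_SUGGESTIONS", query }))
  );

// Perform the request imperatively and hand the results back through a
// callback prop; how the results reach state is left open in this sketch.
const handler = initialProps => effect => {
  if (effect.type === "FETCH_SUGGESTIONS") {
    fetch(`/api/suggest?q=${encodeURIComponent(effect.query)}`)
      .then(response => response.json())
      .then(suggestions => initialProps.onSuggestions(suggestions));
  }
};

// `Typeahead` stands in for the presentational component being wrapped.
const TypeaheadWithEffects = withEffects(handler)(aperture)(Typeahead);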
Refract is a great way to make use of reactive programming techniques in React, and we've made a special effort to have first-class support for multiple reactive libraries (RxJS, xstream, Most, Callbag): we publish fully typed individual packages rather than a main package with adapters (which would lack type safety). We have a document comparing Refract to alternative solutions. Why did you develop Refract?# It's been a long process: what has been open-sourced is the fifth iteration of what we have developed internally at FanDuel to help scale our main application. I was introduced to reactive programming (and React) four years ago: router5 came from starting to learn reactive programming. Three years ago I began to work with a colleague on a new product. Initially, I was tempted to use CycleJS. We ended up using React with Redux due to the ecosystem and the benefits of functional programming. React and Redux were seen as a "safer" choice. I was, however, still very keen to find opportunities to leverage reactive programming. What is now Refract started with a few thoughts around handling analytics in our app (see Redux analytics, without middleware). Through this work, I began to understand the importance of observability, and I started to colocate observing behaviors with components. Initially what was developed was for analytics only. The first iterations were shaped by the various use cases we discovered, until we eventually came up with Refract: a generic way to handle side effects using reactive programming. What next?# I'm not sure I know. We are going to improve the examples and add more to our current documentation. While Refract integrates well with existing codebases, we are looking at providing a local state solution to help new projects. The API surface of Refract is tiny, so I don't think it will evolve much. With React, Suspense is next: I am currently exploring ways for Refract to leverage it. What does the future look like for Refract and web development in general? Can you see any particular trends?# Reactive programming is becoming a hot topic, and side effect management is an excellent application for it. Refract makes it possible to adopt reactive programming gradually and consistently, and I hope it will help a lot of people get started with it. The full data journey, from the user to persistent storage and back, is always evolving and getting simplified. In general, the libraries of today tend to become the implementation details of tomorrow. A library like Redux is a good example: it has enforced good habits in state management (and continues to do so), but it will tend to become a pattern inspiring superior abstractions (as a cache in data-fetching libraries like Apollo, in new local state abstractions leveraging the React context API, ...). What advice would you give to programmers getting into web development?# The journey is long, so stay curious and critical, don't settle, be opinionated but not dogmatic, collaborate with people around you, and listen to your gut! Quite often we have good intuitions: it is by learning, discussing, and experimenting with others that we develop knowledge, gather experience, and innovate. But do all of that without losing focus on the primary goal: crafting great experiences for users! Who should I interview next?# I like Robin Frischmann's work and in particular Fela, a CSS-in-JS library we've been using successfully for more than a year. Any last remarks?# Thanks for the interview! Conclusion# Thanks for the interview, Thomas!
Refract provides a robust solution for managing application side effects in a reactive style by the looks of it. To learn more, head to the Refract site and check out the project on GitHub.

DSS - Deterministic Style Sheets - Interview with Giuseppe Gurgone

CSS is perhaps one of the most controversial parts of web development. For some, it's the favorite part; for others, the least pleasant. As a result, many solutions have appeared around it to make it more palatable to web developers. To learn more about one of them, this time we'll learn about DSS, a solution by Giuseppe Gurgone. Can you tell a bit about yourself?# Giuseppe Gurgone My name is Giuseppe, and I am a front-end engineer from Sicily, Italy. In the past I worked for Yelp on their frontend core team. I am a core team member of SUIT CSS and co-author of a CSS-in-JS library called styled-jsx. If it wasn't clear, I like to build front-end infrastructure and CSS libraries. 😅 How would you describe DSS to someone who has never heard of it?# DSS - Deterministic Style Sheets - is a superset of CSS that can be compiled to atomic CSS classes. In addition to producing incredibly small bundles, atomic CSS classes can be exploited to bring deterministic style resolution to CSS. For those who are not familiar with the concept, deterministic style resolution means that styles resolve and affect an element based on their application order rather than the cascade or the order of their source files.

<!-- text is green -->
<p class="red green">hi there SurviveJS friends</p>

<!-- text is red -->
<p class="green red">hi there SurviveJS friends</p>

In my opinion, this way of using styles is more powerful and predictable, and apparently I am not the only one who thinks that: This is definitely how I thought css worked when I first read the spec in ~2002/3 - Nicole Sullivan 💎 (@stubbornella) - July 19, 2018 How does DSS work?# DSS is similar to CSS Modules, and it is language agnostic. You write styles in regular .css files and then compile those with the DSS compiler to produce a single tiny bundle of atomic CSS classes that you include in your application via a link tag. Like CSS Modules, for each CSS file the DSS compiler produces a JSON file (or JS module) which maps the original selectors to their corresponding atomic CSS classes.

.foo {
  margin-top: 30px;
  color: black;
  font-size: 10px;
}

.bar {
  color: green;
  font-size: 345px;
}

{
  "foo": [
    "dss_marginTop-30px",
    "dss_color-black",
    "dss_fontSize-10px"
  ],
  "bar": ["dss_color-green", "dss_fontSize-345px"]
}

The JSON above is what you import in your templates when you want to use the DSS styles. You can then consume those styles using a helper that merges the atomic CSS class arrays right to left, like Object.assign does in JavaScript.

// DSS also comes with a webpack loader if you are using it in JavaScript.
import styles from "./my-component.css";
import classNames from "dss-classnames";

document.body.innerHTML = `<div class="${classNames(
  styles.foo,
  styles.bar
)}">hi</div>`;

The above produces:

<div
  class="dss_marginTop-30px dss_color-green dss_fontSize-345px"
>
  hi
</div>

Merging is done (right to left) using the first occurrence of a property, e.g., dss_color, and ignoring the others. Thanks to the low specificity and the naming scheme of the atomic CSS classes, DSS can guarantee that styles are resolved in application order, i.e., deterministically! Note that the classnames helper can be implemented in any language. How does DSS differ from other solutions?# DSS is just good old static CSS compiled to atomic CSS classes. Many love atomic CSS class based solutions like Basscss, Tachyons, and Tailwind CSS. While I like how productive such approaches make me, I think that having to do the compiler's job and memorize all those class names is a bit inconvenient.
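To pin down the merging rule described above, a small sketch of the helper's behavior; the exact output ordering is an assumption based on the earlier example:

import classNames from "dss-classnames";

// Right-to-left merge, like Object.assign: for each CSS property,
// the occurrence in the rightmost argument wins.
const result = classNames(
  ["dss_color-black", "dss_fontSize-10px"],
  ["dss_color-green"]
);

// -> "dss_color-green dss_fontSize-10px" (assumed output)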
By compiling CSS to atomic classes, DSS allows me to write as many declarations as I want without penalizing the size of the final bundle. So I get to write the CSS I already know, e.g., margin-top: 25px, and a compiler makes sure that it is compiled to atomic CSS and deduped if there are multiple occurrences of that declaration. It is a win-win situation. Ah, and you also get deterministic style resolution. 🕶 Why did you develop DSS?# Mainly because I use CSS Modules at work, and I am a bit frustrated by the fact that you can still write overly specific CSS selectors. If you import the CSS files in the wrong order, you can easily screw up your application (👋 cascade). In addition to that, with atomic CSS your application bundle size grows logarithmically, i.e., at some point you can keep adding CSS, but the file size of your CSS bundle won't increase. In the end, I wanted to bring some of the good ideas from CSS-in-JS to static CSS land (and make Alex Russell happy). So my advice for folks who want to do the CSS-in-JS thing is to find a system that compiles the rulesets out and bottoms out at class set/unset. - Alex Russell (@slightlylate) - August 3, 2018 What next?# I would love to add source map support for better debuggability in development, add automatic shorthand property unwrapping, and abstract the atomification library so that it can be used at runtime too - you know, for dynamic styles. But most importantly, it would be amazing if people could try it out and provide feedback! What does the future look like for DSS and web development in general? Can you see any particular trends?# I don't know what the future of DSS looks like, since the project is still at the validation stage. The principles behind it have been proven reliable by other similar solutions (e.g., CSS Blocks); therefore its future might depend on my marketing skills and the ability to make other people aware of its existence. :) As far as web development is concerned, I think that the future is about building a smaller, simpler, and more robust set of APIs and primitives on top of the DOM that will act as the normalize.css of the Web Platform. React Native is doing this for native platforms, and React Native for Web is a first attempt at building a web framework on such a foundation for better web applications. For all I know, we could even go back to Dreamweaver, though. We will also see a mix of web technologies and native code thanks to WebAssembly - we already do that at work. What advice would you give to programmers getting into web development?# The beginning is probably the best and most exciting part of a programmer's career. At this stage, you probably don't or can't have strong opinions and are less likely to scope creep, which is a great thing. Don't let the lack of experience or knowledge intimidate you; roll with it, get things done, break things, and learn as you go. While you do that, also review your code and "try harder" constantly. Who should I interview next?# Nicolas Gallagher about React Native for Web, and @electrobabe and @evatrostlos because they are starting Women && Code in Vienna. Any last remarks?# If you use React and any className helper, I made an awesome Babel plugin for you: babel-plugin-classnames. I also developed a little tool to check the file size of your CSS bundle and what it would be like if styles were compiled to atomic CSS classes. Check out atomic-css-stats. Conclusion# Thanks for the interview, Giuseppe! I can see DSS solving a key pain point of CSS, and I hope developers find it.
You can learn more at the DSS homepage. Also check out the project on GitHub and play with DSS online.
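As a footnote to the "How does DSS work?" answer above, here is the promised sketch of what a classNames-style merge helper could look like. It is illustrative only, not the actual dss-classnames implementation, and it assumes class names follow the dss_property-value scheme shown earlier:

```js
// Illustrative sketch; the real dss-classnames package may differ in details.
// Merge atomic class arrays right to left, keeping the first occurrence of
// each property prefix (e.g., "dss_color") so the rightmost argument wins,
// like Object.assign.
function classNames(...lists) {
  const seen = new Set();
  const out = [];

  for (let i = lists.length - 1; i >= 0; i--) {
    for (const cls of lists[i]) {
      // "dss_color-green" -> "dss_color"
      const prop = cls.slice(0, cls.indexOf("-"));

      if (!seen.has(prop)) {
        seen.add(prop);
        out.push(cls);
      }
    }
  }

  // With atomic classes, the order inside the class attribute has no effect
  // on styling, so it is fine that it differs from the input order.
  return out.join(" ");
}

console.log(
  classNames(["dss_marginTop-30px", "dss_color-black"], ["dss_color-green"])
);
// -> "dss_color-green dss_marginTop-30px"
```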

SurviveJS - Fall Events

Summer is over and I've made travel plans for the Fall. More details below. WebExpo - 21-23.09.2018# WebExpo 2018 I went to WebExpo last year for the first time. It's one of my favorite crossover conferences as it spans from development to business to design. They have the same spacious venue this year and it's always nice to visit Prague. Learn more about WebExpo at their site. If you want to join me, use the code juho10 to get 10% off when buying a ticket. WebCamp Zagreb - 04-06.10.2018# I will visit Zagreb for the first time in October when I go to WebCamp Zagreb. The conference is aimed at both designers and developers, so it should be highly interesting for me. I'll discuss static sites and their potential in my talk. In addition, I'll give a workshop about webpack. GraphQL Finland - 18-19.10.2018# GraphQL Finland 2018 To build on the success of React Finland, we'll organize the first GraphQL event of the Nordics in the form of GraphQL Finland. The lineup looks promising and it's a great way for you to get into the topic and see how GraphQL is transforming the backend. You can get 10% off the price if you use the code juho10. ReactiveConf - 29-31.10.2018# I'll visit ReactiveConf in Prague to see what they are up to this year. The lineup has many familiar names and a few new ones. It's also a scouting trip for me as I want to make sure React Finland 2019 will be at least as good as the first one. JSConf Armenia - 03.11.2018# I'll visit JSConf Armenia at the beginning of November. It's my first trip to the country. In my talk, I'll examine the past, present, and future of React. Conclusion# I am an organizer at React Vienna so that's one way to spot me. Let me know if you are coming and I'll give you a tour.

Uppy - Painless Uploads for JavaScript - Interview with Artur Paikin

Let's say you are building a CMS or a blog with an admin interface. It won't take long until you want to upload files to your service. There are standards for this, but eventually, you want to do something more complicated. That's where solutions like Uppy by Artur Paikin and the team come in. Can you tell a bit about yourself?# I work with Transloadit on uppy.io, an open source JS file uploader. I'm passionate about DIY home automation, traveling, indie web (RSS and standalone blogs forever!), alternative living spaces (camper vans, cabins, boats) and text adventure games (80 Days, Sorcery! and Secret of the Monkey Island in particular). How would you describe Uppy to someone who has never heard of it?# Uppy is a simple file uploading widget/library for the browser. It's modular, easy to use and lets you worry about more significant problems than building a file uploader. I sometimes get asked why the world even needs anything beyond <input type="file">, why bring JS into this. The truth is: for many cases <input type="file"> is fine. Sometimes, however, you'd like to add a nice drag and drop surface with file previews and upload progress. Webcam support might be excellent to have. Picking files from your Dropbox without downloading them to your mobile phone first can save a lot of bandwidth and battery life. For large files, resumability is also essential: surviving wifi hiccups or accidental navigate-aways and not having to start the upload from scratch. All those things can significantly improve the user experience when uploading is a central aspect of your web app. And Uppy can be deployed with nothing but a JS tag, using an existing <form> for fallback, and your Apache/Nginx server. Uploading files How does Uppy work?# Add Uppy to your website or app: include simple <script> and <style> tags with a CDN batteries-included bundle, or pick and choose exactly the plugins you need from npm (and build yourself with, e.g., webpack). Your users select files by dragging and dropping, taking pictures with a camera, pasting any URL, picking from Instagram, Google Drive, etc. Uppy displays file previews and lets you rename or edit metadata. Finally, files are uploaded to a backend of your choosing: Apache, Nginx, IIS, tus, Node, Rails, direct S3, etc. Uppy is processing-backend-aware, so you can optionally do video encoding, image watermarking, face detection, etc., and report the progress back to the user. Since this last step requires heavy-duty servers, you'll likely be paying some fee if you want it. Someone needs to write the backend glue and scale the servers, whether it is Transloadit or Linode/Hetzner/EC2 and your dev team.

```html
<!-- please replace `v0.26.0` with the latest version -->
<link
  href="https://transloadit.edgly.net/releases/uppy/v0.26.0/dist/uppy.min.css"
  rel="stylesheet"
/>

<div id="choose-files">
  <form action="/upload">
    <input type="file" />
  </form>
</div>

<script src="https://transloadit.edgly.net/releases/uppy/v0.26.0/dist/uppy.min.js"></script>
<script>
  const uppy = Uppy.Core();

  uppy.use(Uppy.Dashboard, {
    target: "#choose-files",
    inline: true,
    replaceTargetContent: true,
  });
  uppy.use(Uppy.Webcam, { target: Uppy.Dashboard });
  uppy.use(Uppy.XHRUpload, {
    endpoint: "https://mywebsite.com/receive-file.php",
  });
</script>
```

Check out the live version on CodePen and more examples. Everything is a plugin in Uppy. Start simple, add just what you need (or don't): uploaders ("regular", xhr, s3, tus), Redux integration, React components, progress bar, extracting values from the <form> element on a page.
You name it, we've got a plugin for it (and if not, send a PR or publish your own; we'll happily link to it ;-)).

```js
const uppy = Uppy({
  restrictions: {
    maxNumberOfFiles: 3,
    minNumberOfFiles: 1,
    allowedFileTypes: [
      "image/*", // MIME type
      "video/*", // MIME type
    ],
  },
});

uppy.use(Form, {
  target: "#my-form",
  getMetaFromForm: true,
  addResultToForm: true,
});
uppy.use(ReduxDevTools);
uppy.use(AwsS3, {
  limit: 2,
  timeout: ms("1 minute"),
  serverUrl: "https://uppy-server.myapp.com/",
});
```

How does Uppy differ from other solutions?# Commercial offerings cost money and make it hard to switch. Uppy uses open standards, and plugins can be written to interface with anything. We do offer a "hosted" option, but you can pack your things and move to your own platform. Besides commercial ones, here are a few great open source alternatives that come to mind: Fine Uploader is a significant source of inspiration for us. It's been around for a while and is a robust library with tons of features: drag-drop, file previews, validation, uploads to S3, etc. Dropzone.js: nice UI, a lot of configuration options. react-dropzone: similar to Dropzone, but React-specific. How Uppy is different: Unlike most of the solutions above, Uppy has remote provider support (remote URLs, Instagram, Dropbox, etc.). Other services that do offer it are usually commercial, not open source, and sometimes hard to customize. Webcam support. We took a different approach with the UI that we feel could be a good fit for existing websites and applications. The previously mentioned support for resumable file uploads via the tus protocol and the Golden Retriever plugin, which recovers uploads after browser crashes, allow for a robust uploading experience. Uppy is aware of encoding backends, so you can quickly add file processing, such as video transcoding or image watermarking, when needed. Uppy supports React (or any other framework) and time traveling but does not require them. There are community-offered Vue.js and Angular bindings, and we're aiming to make those first-class citizens too. You can supply your own plugins for encryption, modification, meta editing, etc. Dropping files Why did you develop Uppy?# When I started working with Transloadit, we wanted to build a new file uploader for their file processing and encoding services. They already had a jQuery SDK and were looking to modernize it. Then, after some initial prototypes and thinking, we decided that instead of working on a proprietary SDK, we would build an open solution for everybody to use and hack. Transloadit support became an optional plugin. I wanted a file uploader that I could use with my static blog generator to import photos from, say, Instagram and remote URLs. I wanted it to be an open source standalone solution, so Uppy fits well. What next?# We are preparing Uppy for a 1.0 release: stabilizing APIs, polishing docs. We've just completed a conversion to a Lerna repo, meaning that all plugins are published as individual npm packages but kept in a single GitHub repo. This way it's easier to add new features in one PR, maintain issues and build tools. Besides that, we are working on some basic React Native support, and we'd like to release an Uppy WordPress plugin, add image cropping, and more options for file restrictions. Our to-do list is massive, but contributors help a lot! What does the future look like for Uppy and web development in general? Can you see any particular trends?# More developers and tools are becoming aware of bloat, working on reducing JS and CSS bundle sizes.
React has recently gotten much more lightweight (Uppy uses the excellent Preact internally because it's even lighter), and Svelte goes further and turns your templates into small, framework-less vanilla JS, which is very interesting. I try to utilize bundlephobia, import-cost and size-limit. Making things less complex for developers is a recent welcome trend. Browserify was relatively simple from the beginning, and Webpack has simplified things by a lot lately (though I still spend a good chunk of my dev time configuring it, I do understand it's the price of flexibility). I enjoyed Parcel, and I think I'd recommend it for new devs because it's just magical to run parcel index.html and see your app load in the browser with all the CSS/PostCSS/JS/Babel "just working". Tools like create-react-app and vue-cli (which you can even optionally configure without "ejecting") are excellent solutions to the "configuration fatigue" as well. I also really dig Standard JS: I use it and stay productive instead of arguing about code styles. I think we'll continue to see more PWAs and light websites replacing traditional mobile apps, where that makes sense. Accessibility issues are being brought up and addressed by more developers and framework authors. The React docs now have a neat section on this topic. I'm betting on this work to continue, because, as Sara Soueidan said: "The web is accessible by default. You should try not to break it". I hope that the regained interest in the indie web, publishing to your own domain first, RSS and the decentralized internet (different domains, but also Dat, Beaker Browser and IPFS) will stick, and more people will see the benefit of not trusting all of your digital life to a single third-party service. The web was all about the democratization of people and knowledge, and I feel these (and similar) technologies could help us keep those ideas alive. What advice would you give to programmers getting into web development?# Reading the previous SurviveJS interviews, this seems to come up a lot in our community: don't get overwhelmed by new frameworks, libs, choices and build tools. I know it's hard; I see a new CSS-in-JS solution I want to try every day on Twitter. :-) In the long run, it's more important to continue developing and publishing your work than getting stuck mastering new tools. When you'd like to try a new idea, try it in the browser with a plain index.html or on CodePen. Start simple and add things on top if needed. Recently I've been seeing more developers who stick to plain CSS and JavaScript, without all the useful bloat we've added to it, and I can't help but long for that a little at times. So this is advice offered to me as much as the next person. :-) Who should I interview next?# Devine Lu Linvega, who lives and travels on a sailboat while developing open source tools and games: Rotonde, a decentralized social network; Dotgrid, a minimalist illustration tool featuring various line-types, line-styles and vector effects; and Paradise, an interactive fiction novel game/playground. Renée Kooi, my colleague who, besides Uppy, is working on some really cool stuff: tinyify (a Browserify plugin that runs various optimizations on JS bundles), common-shakeify (Browserify tree shaking), bankai (a dev server/JS+CSS compiler) and üWave, the federated collaborative listening platform. Tara Vancil and Paul Frazee, who are doing rad cool stuff with the P2P web and Beaker Browser.
Nolan Lawson, who besides working on improving the Microsoft Edge browser is a big proponent of progressive web applications, and has created Pokedex, a Pokemon database PWA with offline support, and Pinafore, a PWA for Mastodon (a federated alternative to Twitter) using the Svelte library that I mentioned above. Any last remarks?# I'd like to thank our team! It's a joy to work on a fantastic open source project with unbelievably talented people: Kevin and Tim, the founders of Transloadit, who inspired and sponsored the Uppy project; Harry, who we worked with at the beginning of Uppy; Renée, who (fun fact) I met via GitHub PRs and discussions, and who became an invaluable addition to the team; Ife, who maintains Uppy Companion, a server component that enables picking files from Instagram, Google Drive or remote URLs; Alex, who helps design our UI; and Marius, who's made our tech so robust (as the driving force behind tus.io). Conclusion# Thanks for the interview, Artur! It's great to see solutions that go beyond the standards. To learn more about Uppy, head to their homepage and check Uppy on GitHub as well.

GraphQL Finland - Learn GraphQL Up North - Interview with Mikhail Novikov

Following the success of React Finland, we decided to organize another event this year. GraphQL Finland has a different scope, and it's going to be the first Nordic GraphQL focused conference. To tell you more about the conference, I am interviewing Mikhail Novikov, one of the main organizers. Can you tell a bit about yourself?# freiksenet I'm a co-founder of Reindex, the first GraphQL startup in the world, and have been doing GraphQL consulting, training and open source work for the last couple of years. I've developed Apollo Launchpad and schema stitching in graphql-tools, and worked with companies like Apollo and Expo. I'm also helping organize GraphQL Finland and React Finland here in Helsinki. How would you describe GraphQL Finland to someone who has never heard of it?# GraphQL Finland is a community-organized non-profit conference about GraphQL. It's going to happen on the 18th and 19th of October at Paasitorni Congress Hall in Helsinki, with workshops on the first day and talks on the second day. What does GraphQL Finland offer?# We have a great line-up of speakers, with keynotes by people like Dan Schafer, one of the creators of GraphQL, and Adam Miskiewitz, who helped move Airbnb to GraphQL. The workshops are hosted by well-known members of the community, like Marc-Andre Giroux, Sara Vieira, and Nik Graf. It's an excellent opportunity to learn about GraphQL if you are new to it, or to improve your knowledge if you are already familiar with it. The event is also a great chance to meet other members of the GraphQL community. How does GraphQL Finland differ from other events?# It's a community-organized conference. We are not affiliated with any GraphQL vendors, and we want to build an event that's inclusive to everyone interested in GraphQL. We also value diversity highly and are offering a free "Let's Learn GraphQL" workshop to members of underrepresented groups. This ticket will also include a conference ticket. Apply for a diversity ticket. The trip is not included. Why did you decide to arrange GraphQL Finland?# I don't feel there are enough events focused on GraphQL. It's a rapidly growing technology, but we are very constrained by the fact that the community often shares the space with frontend and React at conferences. There should be a platform for people to talk about GraphQL, and we wanted to make that platform as inclusive and community-oriented as possible. That's one of the main reasons why GraphQL Finland exists. What next?# Assuming GraphQL Finland goes excellently, we'll organize React Finland 2019 next. What does the future look like for GraphQL Finland?# I want it to become a yearly event. This, of course, requires more validation that the community is big enough in Finland and that the international community wants to attend events here. In any case, I'll work on growing the community. What advice would you give to programmers getting into web development?# Always bet on React :P Who should I interview next?# Adam Miskiewitz from Airbnb, about how they introduced GraphQL to their vast codebase. Any last remarks?# Come to GraphQL Finland! It's going to be great, Finland is beautiful in October, and the conference is going to be genuinely fantastic. I'm looking forward to seeing everyone interested in GraphQL at the event. Conclusion# Thanks for the interview Mikhail! I think we'll see you at GraphQL Finland. Check also the conference blog, follow the Twitter account, and buy tickets.

Progressive Web Apps - The Why and How - Interview with Maciej Caputa

Even though the web started from content, it has transformed into an application platform. Approaches like Progressive Web Apps are a clear sign of this. Maciej Caputa will explain the topic in more detail in this interview. Can you tell a bit about yourself?# At Netguru, we change our main project every 6-9 months. This way of working gives me ample opportunities to gain experience quickly, code in different environments or tech stacks, and work with different users' and clients' needs. I focus mostly on JS and React (and all their quirks and quarks), but I always keep in mind the importance of providing excellent user experience with modern and intuitive design. I think that the critical features for making the user happy are performance, interactions, and smooth animations. I value inclusiveness, openness, and equality – and that's probably the main reason why the philosophy behind Progressive Web Apps is so important to me. How would you describe Progressive Web Apps to someone who has never heard of them?# A PWA is a standard web application built with a focus on performance, responsiveness and native experience. Thanks to the manifest file and service workers, you can add the app to your home screen with one click, and then it behaves like a regular mobile app. An app can offer a full-screen, native experience even when the user is offline. PWAs are capable of downloading data in the background, caching communication with the server, and supporting push notifications. At the same time, PWAs are lightweight, always up-to-date, and, in many cases, more performant than regular mobile and web applications. In other words, a PWA looks like a native app, but under the hood, it is a regular website. Thanks to that, you can quickly provide a native experience on mobile, which will undoubtedly increase user engagement. How do Progressive Web Apps work?# From the technical point of view, a PWA is a web application built with HTML and JavaScript, to which we add progressive features (mostly with manifest and service worker files); a minimal sketch of these pieces appears at the end of this interview. The implementation process of a PWA is no different from that of a regular web app. A PWA lives on top of a fullscreen browser view (without the browser's address bar and navigation) and can provide the same or even better experience than a native app. It can perform background tasks, work offline, and support push notifications. A PWA strongly relies on caching: it is capable of caching web assets and requests on your device. Moreover, it can save user-generated data when the user is offline and push it to a server when the user comes online again. How do Progressive Web Apps differ from other solutions?# A Progressive Web App does not require downloading big packages of data to install; you don't need to publish it in the App Store or prompt the user for updates. At the same time, PWAs provide a native experience. A PWA can share the codebase with your regular website, but it will engage users more than a traditional mobile/responsive website. It seems that users prefer to use applications instead of mobile sites on smartphones, but at the same time, they are not eager to install apps from the stores. PWAs solve that problem. What other solutions are there?# If we consider a PWA as a replacement for a mobile app, then building a native app would be the other solution. We can do it in a standard native way (separate apps for iOS and Android) or use tools that allow us to share the same codebase but still get a native app (like React Native or Xamarin).
The other option would be to create a hybrid app: an app built in HTML and JavaScript, working on top of a browser engine and packaged with tools such as PhoneGap, Ionic, etc. All the above solutions have different pros and cons, but what they have in common is that they need to be distributed via the App Store/Google Play Store, often requiring the user to download a large installation package. Building apps this way, in most cases, means that you can't share the codebase between mobile and web, and you will have to build two or even three separate apps (and also maintain them separately). It also requires paid developer accounts and using platform-related tools such as Xcode or Android Studio. The other solution would be to use regular responsive/mobile websites, but in most cases, users prefer to use apps on mobile instead of sites in a browser. Another buzzword promoted some time ago by Google is AMP (Accelerated Mobile Pages). Its goal is similar to that of PWAs: to build performant and fast-loading websites that provide a great experience on mobile. However, the main difference is that AMP is focused on web pages that contain content and sell products, not on applications that offer features. On top of that, to build an AMP page, you need to use a particular tool, which has some limitations. What does the future look like for Progressive Web Apps and web development in general? Can you see any particular trends?# Looking at the many PWA success stories from big companies like Financial Times, Twitter, Tinder, AliExpress and more, I think that the popularity of Progressive Web Apps will increase dramatically. PWAs can significantly improve user experience and engagement, especially on mobile devices with a slow or even no connection. The support for PWA features is still growing. Lately, Apple introduced support for service workers and the manifest in their new iOS release. That's a clear indication that something serious is going on. That said, the support is not perfect yet and still requires some tweaks to provide a fully native experience; for example, the splash screen is not generated from the manifest, as you can see in my article. Some say that 2018 is the year of PWAs. I think that many people still aren't aware of the possibilities and benefits that come with PWAs. It's our job as developers to choose the best tech stack and consult on the tools used for the product. I imagine that in the future we will build only one app, a progressive one, and it will work natively regardless of the device it runs on. What advice would you give to programmers getting into web development?# Regardless of the seniority level, I would advise programmers to always keep in mind two simple rules: Always think about users' needs and expectations. Remember that the solutions and tools you use as a frontend developer have a great impact on the intuitiveness and usability of the product you're building. Always use the best tool for the job, regardless of what the hype is at the moment. As soon as you start behaving like a sensible programmer – responsible not only for the code you write but also for the features you deliver – you will be able to add real value to the web world. Who should I interview next?# I am a fan of React, but I'm watching closely as a new star begins to shine brightly – and I mean Vue. I would suggest interviewing Michał Sajnóg (who has just joined the Vue core team) about it! Vue and PWAs are a match made in heaven. 💪 🌈 🦄 Any last remarks?# PWAs aren't only a replacement for mobile apps.
They also stand for the openness and inclusiveness of the web because they work independently of the device, platform, and connection, providing the best possible user experience on every device. A lot of people still have to make do with a poor Internet connection or slow devices. I think that PWAs will be a huge factor in making the web more available to them, as they are so light on resources. Conclusion# Thanks for the interview, Maciej! I think PWAs are a great compromise and a particularly good idea for web developers to adopt. Browsers and devices are starting to get there, and although you cannot always match a native application, PWAs are the perfect fit for specific purposes.
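As promised above, here is a minimal sketch of the service worker mechanics Maciej describes. The cache name and file list are placeholders, and a real app would also version its caches and handle updates:

```js
// sw.js: a minimal cache-first service worker (file names are placeholders).
const CACHE = "app-shell-v1";

// Pre-cache the application shell on install.
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches
      .open(CACHE)
      .then((cache) => cache.addAll(["/", "/app.js", "/app.css"]))
  );
});

// Serve cached responses when available, falling back to the network.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches
      .match(event.request)
      .then((cached) => cached || fetch(event.request))
  );
});
```

The page opts in with navigator.serviceWorker.register("/sw.js"), and the manifest file (name, icons, display mode) is what enables the add-to-home-screen behavior.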

Proppy - Functional props composition for components - Interview with Fahad Ibnay Heylaal

You might be familiar with packages like Recompose that make it easier to compose components. Recompose is a React-specific solution. What if there was something broader available? Proppy by Fahad Ibnay Heylaal is such a solution. I interviewed Fahad earlier about FrintJS, a framework designed to combine React with RxJS. Can you tell a bit about yourself?# How would you describe Proppy to someone who has never heard of it?# Proppy is a tiny 1.5kB library which enables you to compose your props for UI components functionally. That's all it does, nothing more. It comes with additional integration packages for connecting it to other rendering libraries like React.js, Vue.js, and Preact, as well as packages for interoperability with Redux and RxJS. How does Proppy work?# The goal of this project is to lift the state of your UI components one level up and keep your components layer always stateless. Data can come from various sources (like Redux, RxJS, REST APIs, etc.) and can be composed together, and that composition can later be connected to your stateless component. Proppy flow All the logic and behavior of your component can be expressed as props using the core proppy package. Afterward, that composition can be connected to your rendering library of choice using the higher-order components that Proppy's integration packages provide (a small sketch of this follows a bit later in the interview). How does Proppy differ from other solutions?# It has been only a few days since the release, and I am already getting a lot of similar questions, like how it differs from Redux and Recompose. I will explain briefly below: Recompose# Recompose has been the source of inspiration for this project. I liked how the package was composing props by just using React components in a very functional manner. The primary difference between Proppy and Recompose is that Proppy is rendering-library agnostic and therefore not tied to React. Other key differences include: Proppy gives you access to application-wide dependencies anywhere in the components tree. Proppy does not create one more component in the tree per function used in the composition. Redux# Redux, on the other hand, is used for application-wide state management, not necessarily for individual components. With the concept of providers in Proppy, you can consider your Redux store to be one of them. Other providers in your application could be configuration, theme name, etc. Proppy itself is unopinionated about what library you use for your application's state management. There is a proppy-redux package for convenience in case you are already using Redux. Why did you develop Proppy?# To be strictly honest, I submitted a talk proposal to the AmsterdamJS JSNation conference, which was about embracing RxJS in React and Vue.js. Once my talk was accepted, I had to experiment and come up with some solution to demonstrate in my presentation. The first feature that I had in mind was to only support a "stream in" and "stream out" flow of props expressed via Observables, which is done in the proppy-rx package. That experimentation ultimately ended up becoming Proppy. What next?# There has been a sudden increase in interest in "Micro Frontend Architecture" lately. I think I would like to explore this a bit. I have done something similar in FrintJS before, but I think there is room for a more unopinionated and smaller library that others would be more interested in adopting. Maybe I can utilize Proppy for this (since it already works with React, Vue, and Preact) and build something that supports the broader ecosystem.
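Here is the small sketch mentioned earlier of what the composition flow can look like with React. It follows the style of Proppy's documentation, but treat the exact helper names (compose, withProps, withState, attach) as illustrative rather than authoritative:

```js
import React from "react";
import { compose, withProps, withState } from "proppy";
import { attach } from "proppy-react";

// All logic and behavior is expressed as props, outside the component.
const P = compose(
  withProps({ appName: "MyApp" }),
  withState("counter", "setCounter", 0)
);

// The component itself stays stateless and only receives props.
function Counter({ appName, counter, setCounter }) {
  return (
    <button onClick={() => setCounter(counter + 1)}>
      {appName}: clicked {counter} times
    </button>
  );
}

// attach() connects the composition via a higher-order component.
export default attach(P)(Counter);
```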
What does the future look like for Proppy and web development in general? Can you see any particular trends?# Proppy is still only a few weeks old. I will be writing a lot more documentation and sharing more examples so others can make better decisions regarding adopting this library in their projects. But it's nice to see other developers mentioning via GitHub issues that they have already chosen it for their production applications. Regarding general trends, I suspect a large group of developers will embrace a new transpile-to-JavaScript language soon. A lot of us used CoffeeScript before, then Babel came up, and then TypeScript. I think the next big thing is going to be ReasonML, and I am excited to see how the community responds to it over time. What advice would you give to programmers getting into web development?# Don't get overwhelmed or distracted by every new framework or library (even Proppy :D) that pops up every other day. Pick something and stick to it until you feel at home. Knowing your tools inside out pays off more in the long run. Learn to understand when you want a new tool and when you need one :) Who should I interview next?# I am a big fan of Ives van Hoorne and everything that he is doing with CodeSandbox. I would love to know what more exciting things he is working on at the moment. I interviewed Ives about CodeSandbox earlier. Any last remarks?# I would like to thank you very much for arranging this interview and helping spread the word about Proppy! For the readers, I would highly recommend going through the documentation and playground if this interview has sparked any interest in you at all :) Conclusion# Thanks for the interview Fahad! It's great how you ran with the idea and pushed it to a polished state. To learn more about Proppy, consider the following resources: Docs API Playground GitHub

SurviveJS - Summer Events

Summer is a great time to travel and see new places. So far I've scheduled three events for the Summer, although more may appear depending on public interest. GrazJS - 06-07.06.2018# The first event where I'll take part this Summer is the GrazJS meetup on the 6th of June. We'll do workshops on the 7th, so if you are interested in either ReasonML, design systems, or webpack, this is your chance to get a deep dive into the topic. JIMDO Dev Talks - 28.06.2018# I will be going to Hamburg on the 28th of June to participate in the JIMDO Dev Talks event as a guest speaker. JSCamp Barcelona - 18-20.07.2018# I will go to JSCamp Barcelona in July. You can get 10% off if you use my promo code. Conclusion# The Summer is shaping up nicely and more events might come up. It's likely we'll organize local events, especially in the context of React Vienna.

Fastpack - Pack JavaScript code fast and easy - Interview with Oleksiy Golovko

Tools like browserify and webpack popularized the idea of bundling. The idea is to transform your web application into a format that can be distributed to browsers. Bundlers operate on the module level and can combine assets in various ways. Fastpack by Oleksiy Golovko is a new bundler that emphasizes performance. Read on to learn how it achieves this goal. Can you tell a bit about yourself?# When not programming, I read, play tennis/squash/badminton and spend time with my three kids and my beautiful wife. Also, I am passionate about tasting good beers – you cannot resist it here in the Czech Republic! How would you describe Fastpack to someone who has never heard of it?# Fastpack is a JavaScript bundler. It can bundle other assets as well using webpack's loaders. How does Fastpack work?# Fastpack is written in OCaml and compiled into a binary executable. To start using it, you need to install it from npm: npm install fpack. If everything went fine, you should be able to run node_modules/.bin/fpack --help. We are trying to maintain a minimal set of required configuration parameters. The only way to pass those is to submit them as command line arguments: entry points, output configuration, resolver settings and preprocessors. Under the hood it works just like most bundlers do: Resolve dependencies. Collect modules. Optionally apply transformations. Produce the output. Fastpack uses the Flow parser for JavaScript, so JSX and Flow typings are supported out of the box. Consider the examples below to see how Fastpack can be used: Build a development mode bundle#

```sh
$ fpack --dev -o dist ./lib/index.js
```

Apply transformations to all JavaScript files in lib/#

```sh
$ fpack --preprocess='^lib/.+js$' --dev -o dist ./lib/index.js
```

Use babel-loader for transformations#

```sh
$ fpack ./lib/index.js \
  --dev \
  -o dist \
  --preprocess='^lib/.+js$:babel-loader?filename=.babelrc'
```

You can find more examples and the documentation on the fastpack.io site. How does Fastpack differ from other solutions?# Well, it is faster :) Speaking seriously, we are aiming for three primary goals: Make the bundling phase as fast as possible, ideally making it disappear. Keep configuration approachable and straightforward. Keep Fastpack's source code clean and logical, so that people can collaborate on it. Naturally, we keep in mind other success parameters, like bundle size, which is very important too. But that's a second-tier goal for now. Why did you develop Fastpack?# There were several reasons: First, I wanted to understand OCaml & ReasonML better. What do people do with it? What is the workflow? What are the hiccups? TodoMVC or even a "Real World Example" didn't seem to be exciting use cases, so I decided to try something on the compiler side of things. The second reason was the amount of JavaScript code we had at my company. Our bundles are quite big and took a lot of time to rebuild. Of course, I could have configured webpack better, but I was never too successful at it. What next?# Fastpack is young, and there is a long road ahead of us until it matures enough. Right now, we are considering several directions, which are: Further speed improvements - there are still optimizations possible on this front. Improving the bundle size - more aggressive tree shaking using the control flow graph and elements of symbolic computation. Getting feedback, polishing existing functionality, fixing bugs - i.e., the usual things.
Overall, I am passionate about development tools and would be happy to contribute to other related projects as well. What does the future look like for Fastpack and web development in general? Can you see any particular trends?# As far as I can say, we will still need bundlers in the short/medium term. Hence, Fastpack may have its niche and its users. On the other hand, HTTP/2 and support for ECMAScript modules in browsers will likely eliminate a lot of bundler use cases in the long run. The other trend going on right now (unrelated to Fastpack, or partially related because of the language) is ReasonML. I think this is the future of web development, alongside Elm and PureScript. Of course, I am biased, so take it critically, but writing, debugging & maintaining OCaml/ReasonML code is so much more comfortable, safer & more pleasant than in any other dynamically-typed language I have experienced before. What advice would you give to programmers getting into web development?# I am not feeling in a position to be giving advice, but I think something trivial like "learn, practice and communicate" should always work. And yes, learn OCaml/ReasonML :) Who should I interview next?# I would love to see an interview with Andrey Popp (@andreypopp), Patrick Stapfer (@ryyppy), Nik Graf (@nikgraf) and Vladimir Kurchatkin (@vkurchatkin) - I really loved his talk at ReasonConf. Any last remarks?# Thank you for the interview! Conclusion# Thanks for the interview Oleksiy! Fastpack looks promising. I like the approach and now I'm tempted to try out the tool in a few projects of mine. Learn more at the Fastpack documentation or Fastpack GitHub.

lint-staged - Run Linters on git Staged Files - Interview with Andrey Okonetchnikov

Although linting a project is a good technique to adopt, it comes with a cost as you have to wait for the linter to complete its work. Andrey Okonetchnikov figured out a way to solve the problem. Can you tell a bit about yourself?# How would you describe lint-staged to someone who has never heard of it?# lint-staged is a simple-to-use tool to enforce code quality in teams. Why did you develop lint-staged?# I care a lot about code readability and maintainability. To keep my code in good shape, I use linters and other code analysis tools that help me catch simple bugs earlier. While working with different teams I noticed that: Not everyone cares so much about code quality. Not everyone has a proper linter setup in their editor. As a result, even though some people were committed to using linters, the overall quality of the code was far from great, and it wasn't improving. This kept me thinking about how to enhance the quality of code in teams without it becoming a pain. The idea behind lint-staged is to make the setup process for all developers on the team as simple as possible. Instead of writing instructions on "how to set up XXX linter in YYY code editor" and hoping everyone will follow them, you can commit the lint-staged configuration to the repository (see the example configuration at the end of this interview), and the next time anyone pulls the code from the remote server, they will have the proper setup up and running. How does lint-staged work?# lint-staged is supposed to be used with a git pre-commit hook. The pre-commit hook prevents developers from submitting their work to the repository if it's not up to the project's standard. Unfortunately, there are two problems with git hooks: They are hard to set up and manage across teams. This problem can be solved by using packages like husky or pre-commit. Running linters on pre-commit can be time-consuming since it will lint the whole repository on every commit. lint-staged solves the second problem by only running linters on files that are "staged" for the commit. If you follow good commit practices, you might have edited ten files, but you'd like only two of them in a single commit. In git, you select those two files by "staging" them using the git add … command. When you then run git commit, lint-staged will only run linters on those two files, keeping your work in progress unchecked! By doing so, lint-staged reduces the time needed to run linters but also makes sure that only the changes relevant to the commit are linted. How does lint-staged differ from other solutions?# There are not so many alternatives as far as I know. pre-commit, which is written in Python, is the closest alternative, and it has similar goals, so you might want to take a look at it too. What next?# I'm fascinated by code analysis tools and would like to work on tools that are at the intersection of design and technology: an automated way to better web typography or static analysis tools for maintainers of design systems and component libraries. What does the future look like for lint-staged and web development in general? Can you see any particular trends?# I hope to crack the long-standing issue with partially committed files, which I've been working on for more than a year now. So if you think you can help me, please feel free to join in! As for general trends: I think the JavaScript trend will keep growing and this will lead to two things: Better code analysis tools that optimize JavaScript code and reduce runtime payload will appear. The search for more robust languages will continue.
One of the candidates to become the type-safe JavaScript is ReasonML, which makes me very excited. If you're interested in this trend, you should attend ReasonConf, which I'm co-organizing. What advice would you give to programmers getting into web development?# Don't jump on the technology hype-trains; those come and go all the time. Instead, learn the basics of the web platform: HTML, CSS, and accessibility, since they are the foundation of the open and accessible web. Take some time to learn the basics of graphic design, computer science, and algorithms. Focus on creating and shipping products that are useful and user-friendly and not on the technology behind them. Conclusion# Thanks for the interview, Andrey! I use lint-staged actively and it even made it to the maintenance book. You can see how to set it up there. To learn more about the tool, check lint-staged on GitHub!
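For reference, here is the kind of configuration Andrey mentions committing to the repository, using husky for the hook as the interview suggests. It is a sketch from that era of the tools; treat the globs and commands as illustrative since they depend on your project:

```json
{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "git add"],
    "*.css": ["stylelint --fix", "git add"]
  }
}
```

With this in package.json, running git commit lints only the staged files, re-stages anything the linters fixed, and aborts the commit if problems remain.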

"SurviveJS - Webpack" book updated to webpack 4

Quite a bit has happened in the world of webpack. Most notably, webpack reached version 4 recently. The purpose of this release of the book is to update it to support webpack 4 while preparing for the next paper edition. Compared to the previous release, this one is even lighter although it's more informative in places. webpack became simpler, and this meant the book could become simpler as well. Most notably, CommonsChunkPlugin is gone now and the tool comes with better defaults. If you aren't interested in what has changed, skip straight to the book. Overview of the Situation# Somehow December and the early part of the year just went by. Organizing React Finland took its toll, and it's still an ongoing effort. On the plus side, doing this has taught me a lot so far. I've also begun to generate some business in Vienna although it hasn't been particularly easy. There's likely more to come, and these things tend to get easier over time. After webpack 4 came out, I updated the book to it fairly quickly. I gave it some extra polish, and now it's time to release the improvements to the wider public. There's still some work left to be done, and I can cover topics such as WebAssembly in future versions, but overall it feels like the best version yet. Book Improvements - 2.3# I released a series of silent releases as before. The chapter structure is intact, but the contents have changed significantly in places as webpack 4 allowed simplifications. The grammar has been improved as well. I've listed the main changes next: The Eliminating Unused CSS chapter is before Autoprefixing now as the book felt like it flows better this way. The Comparison of Build Tools appendix has been largely rewritten to reflect the current situation. The Source Maps chapter has been tuned to take webpack 4 into account, leading to simplifications. The Bundle Splitting chapter has been rewritten to take advantage of the webpack 4 syntax and is much simpler now (see the small example at the end of this post). The Code Splitting chapter doesn't contain the old syntax anymore. It's better to refer to the webpack documentation for that. The Getting Started chapter has been rewritten so it's easier for people to go through. The Loader Definitions chapter has been expanded to contain more ideas. The Loading Fonts chapter doesn't contain the Font Awesome example anymore. Their documentation seems to cover usage well enough now, and the change also simplified the remaining book. webpack.NamedModulesPlugin gets less attention now that webpack has mode and module ids are set in a better way without the user having to tune them. The Build Analysis chapter has been expanded to contain more ideas. In total, 242 commits went into the book since the last public release. You can find the changes at GitHub. Remember especially the "Files changed" tab as it gives you a good overview of what's happening with the book. You can find the book below: "SurviveJS — Webpack" - Free online edition "SurviveJS — Webpack" - Leanpub edition (digital) A part of the income (around ~30%) goes to Tobias Koppers, the author of webpack. I support his work this way given mine builds on top of his. What Next?# Even though I have a list of improvements planned for the webpack book, it doesn't make sense to push it to paper until mini-css-extract-plugin and Babel 7 have reached stable status. mini-css-extract-plugin replaces extract-text-webpack-plugin for the majority of users, but it still requires more work. It will simplify CSS configuration somewhat. Most likely the next book release has to do with the maintenance book.
There are those last bits of content that require work, and the book needs structural editing as well. That said, it's already a useful one even in its current state. Conclusion# I hope you enjoy the webpack 4 version of the book! Note that I'm active at the book Gitter channel if you want to bug me about webpack. You can also ask questions at my AmA. There will be webpack workshops in Munich in early May 2018!
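To give a taste of the webpack 4 simplifications mentioned above, the snippet below shows the kind of configuration that replaces the old CommonsChunkPlugin setup. It's a minimal sketch rather than an excerpt from the book:

```js
// webpack.config.js
module.exports = {
  // mode brings sensible defaults: "production" enables minification and
  // other optimizations, while "development" favors build speed.
  mode: "production",
  optimization: {
    splitChunks: {
      // Extract code shared between entry points and vendor code into
      // separate bundles automatically; no plugin configuration needed.
      chunks: "all",
    },
  },
};
```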

Webpack in Munich, May 2018

In addition to writing and consulting, I do occasional training. Most often it's around my webpack book, and it supports different levels of sessions well. Workshops also improve the material, so the work feeds back into the book and strengthens it. To make it easier to get to these events, we'll arrange a series of public workshops in Munich, Germany. Workshops with Tobias and Nikos# It is particularly fun to run these workshops when Tobias Koppers, the author of the tool, is around. If there's something I cannot answer fully, there's someone who has an exhaustive answer in mind. I've teamed up with Tobias for a session of workshops in Munich in early May. We'll have two levels of sessions - one for beginning and intermediate users of the tool and one for experts that have a specific problem in mind. In addition, a London-based developer, Nikos Katsikanis of QuantumJS, has promised to hold a workshop under the topic of Secrets of World Class Developer Teams. Between the 8th and 10th of May in Munich# The three workshops are held between the 8th and 10th of May in Munich. There's limited capacity (up to ten, five for the masterclass) to make sure we can provide the maximum amount of value for each participant. The prices range from 250 to 500 euros (VAT included) per workshop. The Workshops# There are going to be three workshops: two about webpack and one about developing teams. Webpack - The Good Parts - 9:00-12:00# In this high-level overview, you'll learn how to configure webpack. Even if you know it already, there might be some surprises in store as you gain insight into the tool you otherwise might miss. The topics covered include: Fundamental ideas of webpack Development techniques Build techniques Asset management Bundle/code splitting Build analysis Optimizing the build Preview the workshop slides online! Masterclass with Tobias - 13:00-16:00# Do you want to make the most out of webpack or have a specific problem in mind to solve? Join us in this masterclass with Tobias where we will show you how to write loaders and plugins, discuss webpack internals in detail, and help you to solve your issues and speed up your build. Secrets of World Class Developer Teams - 13:00-17:00# In this seminar held by Nikos Katsikanis, you will get an overview of what it takes to build a successful software development team. You will examine patterns of behaviour, skills and traits. The seminar is highly interactive, and everyone is encouraged to share their knowledge in the breakout sessions. Nikos is an experienced JavaScript trainer and consultant. He has worked for over ten years in the industry, enjoys the working dynamics of different organisations, and has a wealth of experience to pass on. His clients have included the likes of British Gas, Dixons Carphone and RES. Conclusion# It will be interesting to see how this works out! In addition to these Munich sessions, there will be webpack sessions as follows: YGLF - Kiev, late May. AngularCamp - Barcelona, late July. React Dev Summit - online, late March; get 10% off with the coupon JUHO, how original. The live stream is free! In addition, it's likely I'll run workshops in Vienna later this year depending on demand and interest. In case you have any questions, leave a comment or contact me directly (see the footer). Tickets are available through Tito.

Experiences on Concat 2018

I participated in Concat 2018 in Salzburg this year. I held a four-hour webpack workshop with Tobias Koppers, the author of the tool, and visited the event itself. Overall, it was a great one-day dual-track conference, and I don't regret going. The Workshop Day# Train at Salzburg We held the four-hour workshop from 13:00 to 17:00. It was based on my slide set, Webpack - The Good Parts. I spent a good chunk of the preceding week updating my content to support webpack 4. Having Tobias around was great as he was able to go into more depth where needed, and I also learned a few new things. After the workshop and a brief break, we headed to the speaker's dinner. It was arranged in an Italian restaurant, de Cesare, and was high quality. I met several people, including Rasmus Lerdorf, the creator of PHP. I understand the philosophy behind the language far better now and can appreciate the ecosystem for its achievements. The Conference Day# The venue was modern The conference started with breakfast, although to save some time I ate one at my hotel. It is good to note the venue itself was about twenty minutes from the center of Salzburg as it was held at a local technical university. The place itself was amazing and had plenty of space available. The only significant restriction was that you couldn't take alcohol inside, but I didn't mind. Daniel Clifford on Optimizing for Real World# Daniel on optimizing for real world Daniel Clifford had the honor of opening the conference. I consider the keynote of a conference to be the talk that sets the tone of the entire day. Daniel's talk was somewhat technical and expertly given. It reminded me of Benedikt Meurer's talks at AgentConf. It was nice to see and understand how the V8 team has managed to improve the performance of the JavaScript engine. That said, the talk felt too technical as the first talk to me. My personal preference is to have an inspirational, high-level talk in the beginning. The talk itself was great, but I feel it could have fit better before lunch. Ivana McConnell on Exclusionary UX and How to Avoid It# Ivana on exclusionary UX User eXperience (UX) is a relevant topic for web developers, as when you design a user interface and its functionalities, you also have to consider different types of users. The talk contained theory and quotes behind the topic while there were also examples. The talk didn't work that well timing-wise as it ran roughly 30 minutes instead of the 20 minutes given to usual talks. A part of this had to do with technical difficulties that weren't the organizers' fault. I feel the same message could have gotten through by putting more emphasis on examples and then justifying the topic through them. The topic fit the conference, but I feel the execution could have been stronger. But I think most of the audience got the point. Cory-Ann Joseph on Poker Playing AIs# Cory-Ann on AI Cory-Ann Joseph's talk was built around a narrative based on her career so far. An interesting fact is that she was once a poker professional. The central message of her talk was that an entire industry can become disrupted fast by emerging technologies, such as Artificial Intelligence (AI). I would have loved to see more examples of how to combine AI with design towards the end. I believe there's a lot of potential in AI-aided design where an AI helps a human designer to generate more concepts, faster.
I don't think fully generative designs are feasible yet, but if we can push even a part of the work to a machine, we'll get better results, cheaper. Ola Gasidlo on How to Make Browsers Compatible with the Web# Ola on how to make browsers compatible with the web Ola Gasidlo's talk was on the second track, and by the time we arrived there, it was full. We had to stand, but it was still an excellent presentation. The talk discussed the history of the web and how we ended up with what we have right now. She also gave insight into how to get involved with its development. There wasn't anything particularly surprising in the talk to me, but it was still an excellent recap. It would have been interesting to see Ola's projections for the future of the web. Jenny Shen on Designing Across Cultures# Jenny on designing across cultures Jenny Shen's talk on the second track was the highlight of the conference for me. I would have made it the keynote as the topic was relevant and the execution was top class. She discussed the cultural differences you encounter in design through her broad experience. As it happens, the way you design has to take the culture into account as otherwise you'll end up with a solution that doesn't work well. Different cultures have different expectations. Interestingly enough, Jenny's talk complemented Ivana's one on exclusionary UX but on a cultural level. It's not always about race or gender; it can be about culture as well. Houssein Djirdeh on Thinking in PRPL# Houssein on PRPL Houssein discussed the popular PRPL (Push, Render, Pre-cache, Lazy-load) pattern made known by Google. Houssein's talk discussed the various possibilities in detail, and it fit the conference well. It was right after lunch, so I was ready to focus on the talk and appreciate it fully. I skipped the next two slots as there was something else to do and continued with Varun's presentation about animation. Varun Vachhar on Mathematics of Animation# Varun on mathematics of animation Varun discussed the mathematics of animation through various examples. It brought back some memories from my university time. I wish they had taught it this way as the concepts become intuitive through animation and can even be fun to implement. Once you see the applications, you will pick up the ideas as well. Max Stoiber on Styled Components# Max on Styled Components After a coffee break, Max gave an entertaining talk that explained why his Styled Components approach works the way it does. It was an excellent introduction to the topic although I felt it could have gone technically deeper. I would have approached it by live coding a naïve version on stage. That's essential as few people know the related APIs and find them magical. The magic goes away when you see how it all goes together under the hood. It's a minor gripe, though, and this way the talk was to the point without veering too much from its topic. Lightning Talks# Andrey wants you to come to React Vienna! The conference had roughly forty minutes for lightning talks. The audience could vote on which ones should go on stage, and then the presenters did their best. The topics were varied and included ideas related to accessibility, productivity, CSS techniques, and Git. They were entertaining overall, and I wish a couple had been longer as there was potential for something more in-depth. Short talks tend to be more challenging to deliver than long ones.
Sara Soueidan on SVG# Sara on SVG The conference ended with Sara Soueidan's talk about SVG (Scalable Vector Graphics) and its capabilities. I knew you could do a lot with SVG, but I didn't think it was possible to do this much. She demonstrated its generative graphics and texturing capabilities, for example. I feel the talk was a little detail-heavy for the last presentation of the day, and I would have been able to appreciate it more had it been earlier. I would have probably swapped her talk with Max's or had an inspirational, lighter talk at the end. Afterparty# DnB at the afterparty After the last talk, there was still something to eat and drink outside. From there, people headed to the afterparty held at Studio 68 and MARK Salzburg. Although the location was remote and a little tricky to reach, it was a good choice. I saw both Rasmus and Tobias singing karaoke and even made a few friends, so it wasn't all that bad. The Good, The Bad, The Ugly# The DJ was the only person dancing I was positively surprised by Concat. The conference went without any significant hiccups. The content was solid although I might have gone with a different order. The food was excellent, and there was plenty of it. I have never seen a conference with food this tasty, and I have seen a lot of events! Even though the conference venue was slightly remote, I can see why the organizers chose it. The space worked well, and there was enough of it. Compared to the other Austrian conferences (AgentConf, ScriptConf) I have seen this year, I would say Concat had the best space and food. AgentConf was on par when it came to the talks, and ScriptConf was solid as well. Conclusion# The organizers of Concat Concat was a great find for me, and I hope more people will discover it. Salzburg itself is worth visiting, and a technical conference like this makes a possible trip even better. I hope I can participate again next year! You can find more of my Concat 2018 photos at Flickr.

Verdaccio - A lightweight npm proxy registry - Interview with Juan Picado

If you develop JavaScript applications, you most likely use npm, the most famous package manager available for JavaScript. At the time of writing, it hosts over 600 thousand packages, and the amount keeps rapidly increasing year by year. That said, npm isn't perfect. What if it goes offline for a while or you want to use private packages at your company? npm provides several commercial options, but today we'll discuss an open source one, Verdaccio by Juan Picado.

Can you tell a bit about yourself?#

Currently, I work in Austria as a Software Engineer at Mobfox. I love meetups, books, sports, and software conferences, and I travel a lot.

How would you describe Verdaccio to someone who has never heard of it?#

Verdaccio is a lightweight private proxy registry with an entirely optional configuration that allows you to host private Node.js packages and is compatible with all client package managers, such as npm, yarn, or pnpm.

How does Verdaccio work?#

Verdaccio emulates the main npmjs registry. Its internals can be broken down into:

- Web Interface: A simple interface to navigate your private packages.
- Private Storage: The main feature is hosting private packages. For instance, you might override packages from public registries. The default storage is file system based.
- Uplinks: References to other registries. Verdaccio can handle as many registries as you want to link. By default, it links to npm.
- Proxy and Cache: The most important part - it allows you to selectively cache or route packages to other registries based on a match pattern.
- Plugin Support: For authentication, web middleware, and soon storage.
- Authentication: By default, Verdaccio uses htpasswd basic authentication, but there are plugins for LDAP, Gitlab, MongoDB, Bitbucket, and more.
- Packages Access: Restrict access to packages by peer groups, users, or both, based on the auth plugin you want to use.

How does Verdaccio differ from other solutions?#

Other solutions very often require a long list of prerequisites before the first usage, their hardware requirements are high, and of course, you usually have to pay to use them. With Verdaccio, you instead start out small with a proper default configuration and can then scale or adapt if necessary. A configuration file is created when you install Verdaccio (see the sketch below for an idea of what one can look like), and you can then customize it using plugins created by the community. And even if Verdaccio by default is file system based, it's a limitation that is easy to resolve using our ecosystem of plugins. You can evolve Verdaccio from a small and straightforward registry to an application scaled to fit large infrastructures using the right list of plugins. Furthermore, we provide Docker and Kubernetes support that makes things even easier for companies that use Verdaccio in their development workflows.

Why did you develop Verdaccio?#

There is a long history behind this project. Verdaccio is one of the multiple forks of sinopia, forked initially by Trent Earl and John Wilkinson after sinopia was abandoned. I became a regular contributor, and after some months of contributing, I got the project's ownership and evolved Verdaccio into what it is today. Among other things, we went from 200 stars on GitHub, 600 downloads per month on npm, and 10k on Docker Hub to 2200 stars, 14k, and 250k downloads. This rise in popularity would not have been possible without the help of many contributors and especially the core team composed of Meeeeow, Ayush Sharma, Breno Rodrigues and many others.
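As an aside, to make the storage, uplink, and proxy ideas above concrete, here is a minimal sketch of what a config.yaml along those lines can look like. The keys follow Verdaccio's documented format, but the scope name and file paths are illustrative assumptions:

```yaml
storage: ./storage

auth:
  htpasswd:
    file: ./htpasswd

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':
    # Private scope: only authenticated users can read and publish.
    access: $authenticated
    publish: $authenticated
  '**':
    # Everything else is served from the cache or proxied to npm.
    access: $all
    proxy: npmjs
```

Pointing a client at the registry is then a matter of, for example, `npm set registry http://localhost:4873/` (4873 being Verdaccio's default port).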
This project is significant for the community and me, and I firmly believe it has to exist as a free and straightforward solution to emulate an npm package system in your company or local environment, as well as being open source.

What next?#

In one word - grow. We want to be the most important and most used open source registry, and for that, we drew up a plan over the last year to provide a good base. Throughout 2017 we managed to release several stable versions, ship new releases, improve Docker support, and publish a new website with documentation, and we have been working on the next major release, v3, in parallel; it is currently in the alpha stage. v3 will provide a bunch of exciting things:

- Scale: Verdaccio v2 is file system based, and that's a problem if you want to scale. Since JavaScript is single-threaded and Node.js only uses one core for each process, a file system does not allow you to scale properly. In v3, we are shipping the possibility to replace the default storage with a custom one, either in the cloud (Firebase, Google Cloud, or Amazon S3) or any NoSQL database like CouchDB or MongoDB.
- Plugins: We have improved the plugin documentation to help developers ship more and more integrations, and we've tried to make the development more accessible with a plugin generator, type definitions based on Flow, and more.
- New Web UI: In the latest version of v2 we have already shipped a new feature, a sidebar with a dependencies navigator. But we want more. We want to create a UI that allows users to update their profiles and tokens and even update the registry configuration. We would like to enable users to customize the theme as well, which may be welcomed by companies that put a strong emphasis on consistency with their corporate identity.
- API: We will provide more support for the npm API, such as tokens, deprecations, or stars.

v3 will still be entirely backward compatible with sinopia; we want its users to feel comfortable moving to Verdaccio.

What does the future look like for Verdaccio and web development in general? Can you see any particular trends?#

Node.js stopped being a tool only for backend developers a long time ago. These days, with JavaScript bundlers such as webpack, Rollup, or Prepack, npm packages have become significant - more than 600k in the central registry and many more privately. But not all is perfect; the many incidents on the central registry over the last years remind us that we need a solution in case something like that happens again. Verdaccio is an ideal tool for avoiding sudden development issues and missing packages, and it can also serve as an offline emergency solution. Verdaccio has lately also been used for E2E testing of npm packages before publishing them to npm, as outlined by Strapi in a Medium post.

What advice would you give to programmers getting into web development?#

Contribute to open source (it will change your life), learn, read books, enjoy, and be happy doing your work. Do not try to learn all the fancy frameworks; focus on JavaScript - it is beautiful and comfortable to learn. Teach others, share your knowledge, and if you drink coffee while coding, make sure it's from Nicaragua; it's magic.

Who should I interview next?#

I admire Kyle Simpson and Nicholas C. Zakas. They are great JavaScript teachers, writers, and excellent communicators; it would be great to have them here. Also, I'd like to read about Rebecca Turner (main npm contributor), Zoltan Kochan (pnpm core contributor), or Sebastian McKenzie (Yarn committer).

Conclusion#

Thanks for the interview Juan!
Verdaccio is a valuable service for any company seriously developing JavaScript-based software. To learn more, head to the Verdaccio site or check out Verdaccio on GitHub.

Parket - A state management library inspired by mobx-state-tree - Interview with Leah Ullmann

State management is one of those topics that divides opinions. We've seen a couple of options so far. In this post, we'll cover Parket, a solution by Leah Ullmann.

Can you tell a bit about yourself?#

My primary interests right now are web dev, devops, and game dev. I am currently working as a freelance full-stack developer.

How would you describe Parket to someone who has never heard of it?#

Parket is a state management library; a well-known example of the category would be Redux. It's primarily inspired by mobx-state-tree, which I didn't use because of the large file size. Parket lets you create models with a state, actions, and views; these can later be used by instantiating them and can be nested inside each other.

How does Parket work?#

Parket internally uses Proxies. Proxy is a newish feature which allows you to wrap objects and intercept get and set access to them; it's like adding a getter and setter to every property, except it also applies to properties added later.

How does Parket differ from other solutions?#

A lot of state management libs seem to focus on immutability; every state update has to return an immutable object. I manage mutability via the proxies: you can't set anything outside of actions, and you also don't have to return anything or call setState and the like, because Parket listens to the state changes and sends events based on those.

A basic example:

```js
import model from "parket";

const Person = model("Person", {
  initial: () => ({
    firstname: null,
    lastname: null,
  }),
  actions: state => ({
    setFirstName(first) {
      state.firstname = first;
    },
    setLastName(last) {
      state.lastname = last;
    },
  }),
  views: state => ({
    fullname: () => `${state.firstname} ${state.lastname}`,
  }),
});

const instance = Person({ firstname: "Tom" });
```

As you can see, the state gets passed to the actions, which just modify it without doing anything special. The same thing happens with views, except those only read from the state, the same as accessing it from the outside; the views get updated on every state change. Anyone familiar with mobx-state-tree will probably see the similarities. You define a model for the state and can reuse it. This is useful mostly for nested models, i.e., todos in a todo list. When instantiating the model, you can pass an object to get merged into the state. I might change this to pass it into the initial function because it can currently override nested objects. I have also adapted the mobx-state-tree TodoMVC example to Parket, which you can find in the repo.

An asynchronous example:

```js
const Async = model("Async", {
  initial: () => ({
    loading: false,
    result: null,
  }),
  actions: self => ({
    async doSomethingAsync() {
      self.loading = true;
      // Be aware that you should handle errors (/rejections)
      self.result = await somethingAsync();
      self.loading = false;
    },
  }),
});
```

As you can see here, Parket doesn't care what your action does or rather what it is; it just listens to state changes.

Why did you develop Parket?#

I found mobx-state-tree a while ago and immediately liked it. However, mobx-state-tree and its dependency on MobX make the file size big. Being in the Preact core team, I obviously had to make something smaller, so after failing two times, Parket was born (~1.5kB).

What next?#

I'm not sure yet; maybe another library when I get an idea - there inevitably will be something. Perhaps I'll go to university soon, so that might be fun.

What does the future look like for Parket and web development in general? Can you see any particular trends?#

I don't think I'm qualified to predict the future, but I'll try anyway.
The one thing I can see happening is that more PWAs (progressive web apps) will get to the market, and with new web features, they can become even more powerful. There are already some fantastic examples, such as Twitter Lite and the new Instagram PWA. You could use Parket in a PWA, so that's nice. There will also always be new frameworks, some worth looking at, others not so much, but it's all in the name of progress. I hope we'll get something innovative sometime soon; Jason (@_developit) has talked about how visual programming could be used for the UI instead of the component-based frameworks we use now.

What advice would you give to programmers getting into web development?#

Go to a coding boot camp if you can, or some other place where you have a teacher you can ask when you have problems. Freecodecamp and the community around it are also great, but I only found that after I already knew most of what they teach. Not specific to the web, but: find a problem and solve it, even if it has been solved before.

Who should I interview next?#

Maybe zouhir or lukeed.

Any last remarks?#

Thanks to the Preact community for being so great; I wouldn't be where I am today without them.

Conclusion#

Thanks for the interview Leah! It's always nice to see new approaches to state management. You can find Parket on GitHub.
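As a footnote, the Proxy mechanism Leah describes is easy to demonstrate in isolation. A minimal sketch in plain JavaScript (not Parket's actual source):

```js
// Wrap an object so every property write is observed - including
// properties that did not exist when the proxy was created.
const observe = (target, onChange) =>
  new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value); // notify listeners, akin to Parket's events
      return true; // signal that the write succeeded
    },
  });

const state = observe({}, (key, value) =>
  console.log(`changed: ${key} ->`, value)
);

state.firstname = "Tom"; // logs: changed: firstname -> Tom
```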

substyle - Build Styling Agnostic Components for React - Interview with Jan-Felix Schwarz

One of the tricky things about writing React components meant for public consumption is making them compatible with the various styling approaches used by the community. The problem exists because application styling isn't considered a first-class citizen by React, and it doesn't provide a strong opinion on how to solve it. As a result, the number of available approaches has exploded. Jan-Felix Schwarz noticed the same problem, and as a result, substyle was born.

Can you tell a bit about yourself?#

How would you describe substyle to someone who has never heard of it?#

substyle is a utility for authors of open source React component libraries. It tries to make it easier to build components in a way that allows users to customize the styles of every single element rendered by a component. Users will be able to do that through CSS, CSS Modules, many css-in-js libraries, or using inline styles. This way, the component integrates well into applications using any styling approach, without forcing an opinion about tooling.

How does substyle work?#

substyle provides a higher-order component that preprocesses whichever props the user passes for styling purposes so that they become more comfortable to consume. It injects a single, special style prop, which is used in the wrapped component's render function to derive the right styling props to forward to each of the rendered elements. For example, a universally stylable <Popover /> component could be written like this:

```js
import substyle from "substyle";

const Popover = substyle(({ style, children }) => (
  <div {...style}>
    <button {...style("close")}>x</button>
    {children}
  </div>
));
```

Now, users of the <Popover /> component can pass their custom className, which will be used to derive classes for all the elements rendered by the component:

```js
// JSX                           // Rendered HTML
<Popover className="popover">    // <div class="popover">
  <span>Hello world!</span>      //   <button class="popover__close">x</button>
</Popover>                       //   <span>Hello world!</span>
                                 // </div>
```

If they want to pass some custom inline styles, they can do so by supplying a nested style object:

```js
// JSX                           // Rendered HTML
<Popover style={{                // <div style="background: white;">
  background: 'white',           //   <button style="right: 0;">x</button>
  close: { right: 0 },           //   <span>Hello world!</span>
}}>                              // </div>
  <span>Hello world!</span>
</Popover>
```

If they use css modules or some css-in-js lib, they will want to pass the unique, auto-generated classes to assign to the elements. They can do so via the classNames prop that is handled by substyle:

```js
// JSX                           // Rendered HTML
<Popover classNames={{           // <div class="1n3n1g">
  popover: '1n3n1g',             //   <button class="ew339k">x</button>
  popover__close: 'ew339k',      //   <span>Hello world!</span>
}}>                              // </div>
  <span>Hello world!</span>
</Popover>
```

How does substyle differ from other solutions?#

I know of one other solution addressing the same problem, called react-themeable. The general idea behind both react-themeable and substyle is the same. However, during the development of a component library at Signavio I had to solve some additional practical challenges:

- How to define default styles for components?
- How to build composite components so that all leaf elements of nested components can also be styled by the user?
- If, depending on the passed props, there are different variants of a component, how to allow the user to define custom styles specifically for a particular variant?

Exploring solutions to these problems, I finally ended up writing my own utility.
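As an aside, the rendered HTML above shows a convention at work: element classes are derived from the component's className by appending the element key, BEM-style. A toy illustration of that derivation rule (not substyle's actual implementation):

```js
// Derive a child element's class from the wrapper's base class.
const derive = (base, key) => (key ? `${base}__${key}` : base);

derive("popover");          // "popover"
derive("popover", "close"); // "popover__close"
```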
Why did you develop substyle?#

I got the initial idea for it while developing an open source React mentions input. As I was aiming to let users style this input widget with CSS and inline styles, I had to add quite a bit of code to my components just for this purpose. To keep my code DRY and the render functions clean, I extracted this repetitive styling logic into a helper function. Later I realized that I could quickly add support for styling through css modules and css-in-js libraries, just by changing this helper function and without having to touch any of the components. And this is basically how substyle came to be.

What next?#

I hope that the idea of supporting universal styling takes hold in the React community and that we can establish some best practices for writing reusable components. It would make app developers' lives better as they would not have to study the docs, examples, or source code of every single component library to find out how to override the styles of particular elements. Instead, they could just use the same familiar styling API for any open source component.

What does the future look like for substyle and web development in general? Can you see any particular trends?#

substyle is just my take on a universal styling API for React components, and it demonstrates that it is quite easy to implement this. So I don't know if substyle as a library will have a future, but I hope that we will continue the discussion about the styling of reusable components. For web development in general, I see much more fundamental trends: One hot topic is the shift from frameworks to compilers. I believe this idea has enormous potential, and it's exciting to see projects like Prepack and Svelte pushing forward this frontier. Another development I expect for the next years is that the architectural boundary between client and server will become more and more blurry as server rendering and GraphQL APIs become the norm. We will be able to share much more code between front and back ends, up to a point where this distinction is rendered useless.

What advice would you give to programmers getting into web development?#

Be more passionate about what you are building than how you are building it. Don't choose libraries and frameworks just because they are hyped, but because they promise to solve a particular problem that you are feeling. I think this helps to embrace that there is so much choice in the JavaScript ecosystem, rather than feeling overwhelmed by it. Also, don't be intimidated by unfamiliar, complex-sounding jargon. Usually, it's just fancy names for simple concepts.

Who should I interview next?#

I dig the stuff Brent Jackson (@jxnblk) is building. He's both a great programmer and a designer, and his work is right at the intersection of the two disciplines.

Conclusion#

Thanks for the interview Jan-Felix! substyle looks like an excellent fit for anyone wanting to write robust React components that are easy to consume. You can find substyle on GitHub. See also Jan-Felix's presentation (16 mins) on the topic.

Experiences on AgentConf 2018

I was invited to AgentConf 2018, its second iteration, about a month ago, as one of the organizers saw a presentation of mine about npm packaging. I gained free entry to the conference in exchange for a lightning talk, but more on that later. The concept is simple. After two days of single-track talks in Dornbirn, there are two days of skiing (alpine, not cross-country) in Lech. Skiing is optional, but at least for me, it was the highlight, even if the talks were of good quality. There were around 180 people in the main conference, and roughly 50 remained for skiing.

The Arrival Day#

Dornbirn at night

As a large part of the hotel capacity of Dornbirn was taken, I decided to stay at a local Airbnb with a friend that was going to the conference as well. We shared the costs and felt it was good value. The train trip from Vienna took six and a half hours, but given the quality of train service in Austria, the travel didn't feel cumbersome. The train network worked well, and the price level of the restaurant was reasonable, at least compared to what I'm used to personally. The quality was excellent as well. After we arrived at Dornbirn, we dropped our things at the Airbnb and headed to the center. From there we went to Panoramarestaurant Karren with a cable car for the speakers' dinner after a short trip by car. It was a great way to start the conference, and I met several people that I knew online already. The food was amazing, and the views were great. The mushroom soup (Steinpilzsuppe) especially was a favorite of mine. There was some complication on the way back, and after the cable car came back down, we ended up walking back to our Airbnb. It wasn't a long distance, and it didn't matter that much after a satisfying dinner.

The First Presentation Day#

Breakfast at Spielboden kino

The first day started with an hour-long breakfast and registration at 8:00. That was early enough for me at least! The breakfast was disappointing, though, as there was only a single kind of bread and something to drink with it. Fortunately, this was fixed on the second day.

Max Stoiber on Open Source#

Max Stoiber on open source

Max's presentation covered his successful career with open source so far. It was interesting for me to contrast it with the one by Evan You at ScriptConf as the take was somewhat different. Whereas Evan's presentation felt more grounded in the rough reality, Max's was more on the lighter, optimistic side. The viewpoints complement each other. I think the key understanding is that open source is not an end in itself but more of a means. The whole situation changed in two decades, as the industry first resisted the idea and then ended up adopting it as mainstream. For me, there's not much left to sell in the idea, and I'm more interested in finding sustainable models, as I feel that's where we still have work left to do.

Guillermo Rauch and Leo Lamprecht on Folding Space and Time#

Guillermo and Leo on folding space and time

Guillermo and Leo discussed how their company Zeit approaches scalability. Once you understand what takes time in your requests, you can start to think about where and how to perform the work. pkg was one of the highlights of the presentation for me. As it happens, precompiling your code can speed it up considerably. One benefit of doing this is that you can then run your application without having to install Node.js, although you still have to compile it somewhere.

Peggy Rayzis on Apollo#

Peggy on Apollo

After a coffee break, Peggy gave a talk on Apollo.
I've seen Peggy present twice before. Even though there tend to be similar elements in her Apollo talks, you also learn new things as their GraphQL client keeps evolving. It seems that at least in some cases you might be able to eschew state management solutions like Redux entirely by using something like apollo-link-state instead. It would not surprise me if this trend continued, although you lose some control in the process.

Kaylie Alexa Kwon on Yarn Workflow#

Kaylie on Yarn

Kaylie discussed the Yarn package manager and how they use it in their company. It was a good talk, given it showed how she got involved with the project and the impact she has made on it so far as an outside contributor. The important point of the talk for me is that you should find tools that fit your process. And if there's something missing, you should look into improving the tool. Yarn seems to be open to improvements as a project. The development has motivated npm to become better as well, so everybody has won. After the talk, there was a lunch break. I don't remember exactly what we had, although I am quite sure it had beans in it; it wasn't a memorable meal.

Carly Litchfield on Testing#

Carly on testing

Although I missed the majority of Carly's talk, I did gain a few insights from it. In particular, I learned about Percy, a tool for visual regression testing. I was aware of the technique, but I didn't know such a good solution was already available. Carly has made her demo application and presentation slides available.

Andrey Okonetchnikov on Linting#

Andrey on linting

Andrey discussed his story with lint-staged so far. The plan was to get a particular feature done by the presentation, but no matter how hard we tried, there was always some edge case we couldn't manage to resolve. Alas, development has to continue. lint-staged is interesting because it allows you to run commands only on the files that are in Git's staging area. Doing this can save a considerable amount of computation and make the development flow smoother. The beautiful thing is that although the tool has been written in JavaScript, you can run it against any other language since it operates on the command level. Since I know the tool already and helped Andrey with the slides, there wasn't anything new in the presentation for me. But I'm sure people that didn't know it yet gained a lot from the talk, and it was pleasant to follow.

Javi Velasco on Agnostic Component Design for React#

Javi on React components

Javi discussed his journey with React Toolbox. It is an implementation of Google Material Design for React, and he is currently working on the next major version. I know from experience API design is hard to get right, so it was a fitting talk. I had seen it before at ReactiveConf, though, so there weren't many new insights for me. When it comes to APIs, the key point for me is figuring out the right coupling and responsibilities for each part of the API. An API should be solid on a conceptual level so that it's easy to explain.

Lightning Talks#

Benedikt on strange features of JavaScript

The lightning talks were the part where I got my ten minutes of fame or so. I was supposed to go first, but due to some misunderstanding, a local team went on the stage instead. I didn't follow their twenty-minute presentation too closely, but I did learn that if you wrap a web application in a desktop shell and it looks roughly the same, the enterprise clients won't care. My presentation was about static sites.
I built my first one over twenty years ago and decided to revisit that era while discussing why static sites and static site generators are so relevant these days. See the resulting site at juhoshomepage.com. Creating an expanded version of this talk would be fun. After me, Patrick Stapfer discussed ReasonML and how he uses CSS Modules with it. I was still too excited about my time on the stage, so I don't remember much else. Finally, Benedikt Meurer discussed strange features of JavaScript. The core point was to avoid ever using `with` anywhere, as it's a horrible feature. Incidentally, it's disabled in strict mode. The problem is that you cannot remove features from the language as that would break the internet. Therefore it can only gain features.

Dinner and Afterparty#

The dinner and afterparty were organized at the venue. The venue itself was an old cinema and fit the event quite well, although the atrium was a little narrow and forced people onto two floors. Apart from that, it seemed to work nicely, and usually there was enough space. I don't have much to say about the dinner, but I'm sure it was something in an Italian style. The afterparty felt weak, so I decided to join other speakers at the center of the city in a cozy little restaurant. I'm not a great fan of afterparties, so not much was lost.

The Second Presentation Day#

Spielboden kino

The second day of presentations started with breakfast as well. This time there was more variety to choose from, although I was also better prepared, having brought something to eat myself.

Sara Vieira on Depression#

Sara on depression

The first presentation of the day was about depression by the famous Sara Vieira. I know her personally, so it was interesting for me to hear how she ended up where she is right now and the effort it took. It was an interesting choice for a keynote, although the topic itself was highly important. People tend not to discuss mental health in public, so it was a good opening. Given it's a massive topic and demands discussion, I would have put it at the end of the first presentation day to give people space. I feel it took some attention away from the following talk, although maybe that's just me.

Nacho Martin on Integrating Redux with a Server#

Nacho on integrating Redux with a server

Nacho Martin, an internet acquaintance of mine, discussed how they are using Redux and integrating it with their Elixir server. The key insight for me was that if you consider what your real problem is, then it's easier to solve. In their case, they had a problem with duplicating logic across both server and client. If I understood correctly, the way to solve this was to simplify the state management and push a large part of it to the server.

Benedikt Meurer on TurboFan#

Benedikt on TurboFan

After a coffee break, Benedikt discussed V8's new engine, TurboFan. The talk made me appreciate all the work Google puts into it. JavaScript isn't a straightforward language to optimize, yet they keep finding ways to achieve that, and there's more in store. One of the learnings for me was that it's better to write close to the standard instead of trying to optimize code yourself. The interpreter can likely optimize the execution better than you can.

Michel Weststrate on Reactivity#

Michel on reactivity

Michel gave a skiing-themed variant of his MobX talk. It was an excellent introduction to the topic, and I'm sure it inspired people to try it. A solid talk. This time around, the lunch was a burger.
Both meat and vegetarian options were provided. Also, there was dessert, for speakers at least. The lunch was much better than on the first day.

Emil Sjölander on Yoga#

Emil on Yoga

Emil discussed Yoga, a cross-platform layout engine. I didn't get much out of the talk, but that might have been due to the excellent lunch, not the presentation.

Kristijan Ristovski on State Management#

Kristijan on state management

Kristijan, also known as kitze by the community, discussed the concept of rock stars, trends, and following them. The points he made were fair. Instead of going with the popular option, it makes sense to consider your options and constraints, and only then make a decision. Going against the mainstream is an option too. Most importantly, you should focus on providing value. The wrong thing done right is still the wrong thing.

Sia Karamalegos on React Performance#

Sia on React performance

Sia discussed React performance and provided multiple viewpoints on the topic. It was a good overview, and you can check out the slides online for the primary ideas. The topic would be ideal for a small workshop.

Asim Hussain on Bots#

Asim on bots

Asim gave the last presentation of the conference, about bots. The point was simple. A certain famous American president writes tweets to which sentiment analysis can be applied. Given he is an influential figure, the argument is that this affects the economy. If you created a bot that uses the technique and then trades, you would be able to make money. Asim made simulated trades based on this. Although the bot wasn't a great success, it still proved the point. Asim's talk helped to show how emerging techniques will change computing in the coming years, and it was an elegant way to end the spoken part of the conference. After Asim's talk, we headed to Lech for skiing.

The First Skiing Day#

Lech in the morning

Given I hadn't been skiing in twenty years (just cross-country), I decided to play it safe and go with the beginner group. It didn't take long for my skiing instincts to kick in, though, and it began to feel comfortable by the end of the day.

The Second Skiing Day#

Before the epic ride

I decided to tackle the most prominent hill near the starting point on the second morning. The conditions weren't as sunny as during the first day, but they were still quite good. The ride was epic, and after that, I headed back to a smaller hill to chill out before heading for lunch with other attendees. Lech proved to be surprisingly expensive (2-3x Vienna prices), so it's not the best place for a cost-conscious person. It was still nice to visit and experience.

The Good, The Bad, The Ugly#

Lech in sunset

Overall, AgentConf was great, and I enjoyed it a lot. Skiing was the highlight for me, although I'll prepare better next year so I can spend more time on the more challenging slopes. I was physically fit for it, but it takes a certain amount of adaptation to get the most out of skiing. The presentations were high quality, although I might have used a different order. I wish the second day had had lightning talks as well, and I would have loved to see panels in the program. One of the neat things AgentConf did was that they got Christoph Nakazawa to host the talks. If the audience was feeling shy, he had a few questions in store. I think it improved the quality of the conference a lot. When you visit a lot of conferences, it's easier to see what's missing. Compared to ScriptConf, I noticed I was missing an MC, although Christoph compensated for this well.
Still, having strong audio in place seems to help with the ambiance a lot, which was interesting for me to notice. The food was excellent, especially at the speakers' dinner and on the second day. For some reason, the first day felt weaker in this regard, although I didn't have to go hungry.

Conclusion#

The organizers of AgentConf

I might go to AgentConf again next year; it was one of the better technical conferences I've ever been to. There are always little details you can do better, but the primary offering is solid and good value. I spent more money on the trip than I would have liked, as I didn't expect Lech to be so expensive. For me, even a less fancy place to ski would have been more than enough, especially given I'm far from the level at which I could enjoy the most challenging slopes. If you want to go to a good conference and enjoy skiing, AgentConf is a great choice. I like the idea of combining high-quality technical content with leisure, and I wish more conferences followed this route. You can find more of my AgentConf 2018 photos at Flickr. See also the official photos.

Experiences on ScriptConf 2018

One of the benefits of living in Vienna is that it's easy to reach central Europe and its conferences. Given Linz is close to Vienna (about 90 minutes by train), I decided to visit ScriptConf. ScriptConf is a JavaScript-themed single-track conference, and the promise was great talks and great food. That was an offer I couldn't resist, and as a result, I found myself in Linz for a day. It's likely I'll visit again, as the city was enjoyable and it felt like it had more to offer. The conference was split into two days. The first day was for workshops and the second day for the presentations. I participated only in the latter day.

The Beginning of the Presentation Day#

The beginning of the conference

The curious thing about ScriptConf was that the official program started at 13:00. There was an hour for registration before that. I came from Vienna on the morning train, and that left me time to explore Linz and make some friends. I met a couple of other developers going to the conference before registering, and we had a chance to get some beverages and breakfast. In retrospect, I should have eaten a proper lunch before the event, given the first official coffee break at three o'clock didn't have anything salty in it. As a result, we left the conference venue and found bosnas for ourselves. It was a new experience for me, but I'm glad we made this move as I needed something salty, although the cakes provided by the conference were tasty as well.

Evan You on Open Source#

Evan You

The day itself began with a presentation by Evan You, the author of the popular Vue.js UI framework. It was about his journey into open source, and I think it was a fitting way to start the day. I could recognize many of his struggles, and especially his version of the hype cycle for open source development resonated with me. Each project has its momentum that it either sustains or loses. More importantly, there's the personal side. As a project gains popularity, it has to deal with the pressures caused by this reputation. For some reason, the entire day was riddled with small technical problems, and this caused the schedule to slip at times. It wasn't a big problem, but it was a little annoying, especially given the day started so late.

Marcy Sutton on Accessibility#

Marcy Sutton

The day continued with Marcy Sutton's talk on accessibility. I feel this is an important topic that needs more attention from the web development community. Often it's an afterthought, if it's given any thought at all. I became aware of the aXe tools, and I'll use the Chrome plugin in the future. I feel the talk would work exceptionally well in a workshop format, as then you would get to test the tools and see their impact on accessibility.

Simona Cotin on Serverless#

Simona Cotin

Simona Cotin discussed the phenomenon of serverless applications. It was an excellent overview of the topic, although I'm not that interested in Azure myself. Perhaps something more platform-agnostic would have fit the conference better.

Michaela Lehr on Augmented Reality#

Michaela Lehr

Michaela Lehr covered the rise of Augmented Reality (AR) and related technologies (Virtual Reality (VR), Mixed Reality (MR)). The beautiful thing about the talk was that it gave a good idea of the potential and the future. It will still take years before we see real mainstream adoption. Now is the time to experiment.

André Staltz on Cycle.js#

André Staltz

André Staltz approached his Cycle.js framework from the refreshing angle of paper coding.
Instead of focusing on code, he focused on graphs to get to the concepts behind Cycle.js. Although Cycle.js was already familiar to me, the talk drove home the key ideas even further and explained the recent improvements which allow you to treat your applications as fractals - applications of applications. The dinner provided after André's talk was adequate compared to some other technical conferences, although it didn't reach the advertised level, at least for me. But then, you don't go to these events to enjoy the local food. That's why restaurants exist.

Phil Hawksworth on Next Wave Infrastructure#

Phil Hawksworth

Although it had a dry premise, Phil Hawksworth's talk was one of the better ones of the conference. It made me even more convinced that static sites, enhanced the right way, are an excellent way to develop websites. The technology is maturing, and it provides even more benefits than I knew. The ability to treat each Git commit as a deployment is a simple yet powerful idea, as it allows quick visual inspections during the development process. The fact that you can complement a static site with dynamic elements takes them closer to the CMS space, and you could claim that there's a significant overlap between the two. It's no wonder we have the category of static site CMSs these days as a result.

Charlie Gerard on Mind Control#

Charlie Gerard

Charlie Gerard discussed how to control JavaScript using your mind. That is, how to achieve this using a specific device. She showed how she acquired the equipment and developed a Node.js API for it. The presentation was made even better by live demonstrations that drove home the points.

The Good, The Bad, The Ugly#

The MC

Overall, the second iteration of ScriptConf was a cool conference. Especially having an MC on the stage was a good idea, and that's something I hope other conferences will copy, as it improves the atmosphere more than you might expect. The format of long presentations throughout the day felt a bit much at times. I would have appreciated lightning talks in between to get access to more ideas. The problem was made even worse by the late start time and the technical delays. Although the primary space of the conference was roomy, the place where you registered and ate felt too small. One way to solve this would have been to use the areas on both sides of the venue to split the problem. The need for space might be a cultural issue, though, as I'm used to having room.

Conclusion#

Linz at night

I am happy I went to ScriptConf, and I feel it was good value. I spent approximately 200 euros on the entire trip, and if nothing else, I got exposed to new ideas and people. I might do this again next year. You can find more of my ScriptConf 2018 photos at Flickr. See also All you need to know: Script18 #scriptconf and Script18 - Impressions and Recap for other reports.

Illuminate - Syntax highlighter for Node - Interview with Vivek Bansal

One of the core features of this site is custom syntax highlighting. I had to figure out ways to deal with the custom syntax provided by Leanpub. Initially, I implemented a solution based on PrismJS, but I wasn't entirely happy with it, and the frustration led me to look into alternatives. That's how I found Illuminate by Vivek Bansal.

Can you tell a bit about yourself?#

I work at Flipkart, India. I started as a PHP developer and later switched to JavaScript/Node.js full-time, and I have nearly five years of professional experience. I firmly believe in the open source philosophy and try to contribute to open source projects regularly.

How would you describe Illuminate to someone who has never heard of it?#

Illuminate is a syntax highlighter which can be used to highlight code snippets in HTML files. It is based on the already popular syntax highlighter PrismJS. It can be easily integrated with tools like markdown-it. It can also be used with ReactJS via react-illuminate.

How does Illuminate work?#

Similar to PrismJS, it works by creating a token tree, matching the code string against a given set of regular expressions called a language definition. Later, the token tree is converted back to a string by wrapping the code in span tags and adding appropriate class names.

How does Illuminate differ from other solutions?#

Illuminate was rewritten from the ground up in ES6 so that it can be used in Node.js and the browser with the help of tools like webpack, Rollup, etc. With react-illuminate, it can also be used with ReactJS in the "React way", without using dangerouslySetInnerHTML.

Why did you develop Illuminate?#

While working on my website, which is statically generated, I wanted something that could be used with markdown-it on Node.js. I was already familiar with PrismJS and its inner workings. I had proposed the change in PrismJS itself, but the maintainers were not interested in it. Hence, I started working on my own alternative.

What next?#

Make it stable and add support for other frameworks/tools like Gatsby, Vue, etc.

What does the future look like for Illuminate and web development in general? Can you see any particular trends?#

I believe that compile-to-JavaScript languages like TypeScript, ReasonML, Flow, etc., will see broad adoption. Type safety will become a first-class citizen of the web world.

What advice would you give to programmers getting into web development?#

Do not limit yourselves to a particular framework/paradigm. Keep pushing your limits.

Conclusion#

Thanks for the interview Vivek! I have been happy with Illuminate so far and hope others find it useful as well. It's a great little project that deserves kudos. See Illuminate on GitHub.
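To illustrate the regex-driven approach Vivek describes, here is a deliberately simplified sketch. It skips the tree-building step and just wraps flat matches in spans; the language definition and class names are illustrative assumptions, not Illuminate's actual API:

```js
// A toy language definition: token name -> regular expression.
const language = {
  keyword: /\b(?:const|let|return)\b/,
  string: /"[^"]*"/,
  number: /\b\d+\b/,
};

// Wrap every match in a span with a class derived from the token name.
function highlight(code) {
  const names = Object.keys(language);
  const pattern = new RegExp(
    names.map(name => `(${language[name].source})`).join("|"),
    "g"
  );
  return code.replace(pattern, (match, ...groups) => {
    // The first defined capture group tells us which token matched.
    const index = groups.findIndex(group => group !== undefined);
    return `<span class="token ${names[index]}">${match}</span>`;
  });
}

console.log(highlight("const answer = 42;"));
// <span class="token keyword">const</span> answer = <span class="token number">42</span>;
```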

Logux - Replace AJAX-REST - Interview with Andrey Sitnik

When you build a web application, you often have to communicate with a backend. It's not uncommon to do this using AJAX against a RESTful API. Logux by Andrey Sitnik is one possible alternative.

Can you tell a bit about yourself?#

Now I am a digital nomad and lead front-end developer at Evil Martians. Most of the readers will know me because of my open source projects: autoprefixer and PostCSS.

How would you describe Logux to someone who has never heard of it?#

Logux is a JS library and Node.js server to replace AJAX requests. It synchronizes Redux/Vuex actions between clients and the server (yes, Redux actions on the server 😆) and between clients. With Logux you don't need to write Redux Saga code, call fetch(), show a loader during a request, handle network errors, or worry that you don't support offline-first and push updates. In Logux you use dispatch.sync(action) instead of dispatch(action), and Logux will send this action to the server and other clients. At the least, we want Logux to have a simple API. In fact, it is still an experiment (the current version is just 0.2). I was inspired by the ideas of CRDT and distributed computing. Right now we still need to understand how to present this remarkable idea in a better way.

How does Logux work?#

The Logux core is a JS library to synchronize action logs between two machines (there are no clients and servers in the Logux protocol; it is a peer-to-peer protocol). By default, it uses WebSockets to maintain a connection (you can change the connection mechanism), and it can store actions in different stores (memory, IndexedDB). This core also takes care of an essential thing in distributed systems: time. For example, Alice did not have Internet for 30 minutes (the NY metro doesn't have network access in trains). But good applications allow changing documents offline, so she can change a document and get a connection only 30 minutes later. 30 minutes is an extended period, and other clients can alter the same document during this time. Because of this, we have to merge changes and fix conflicts. To make things worse, don't forget that Alice's phone could have the wrong time. Yep, a distributed system can sometimes be complicated. The Logux core will mark every action with a particular time mark to handle the problem. Also, it will calculate the time difference between client and server, so it can be sure which action was the last. On top of this core, we have a few packages with an end-user facing API:

- Logux Redux wraps all the Logux core magic in a Redux-compatible API. At any moment the Logux server could send an action and put it inside your history (for example, Alice finally got WiFi and sent her changes from the metro ride). Logux Redux will undo all of Bob's newest actions, add that Alice action "from the past", and replay all of Bob's actions again. Or at any moment the server could send an "undo" action (for example, you changed your login offline, but this login was taken, and the renaming could not be applied anymore), and Logux Redux will remove this action from the history.
- Logux Vuex does the same for Vue and Vuex.
- Logux Status contains widgets and UX best practices to show the current synchronization process to the client. With Logux you can implement Optimistic UI, which updates the UI immediately after a "Save" button click. If the user doesn't have a connection, Logux Status will show a widget saying "Your changes were not saved on the server, connect to the Internet to save them."
- Logux Server is a Node.js framework, but the Logux protocol is open. A Logux server will be similar to most Node.js web servers.
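To give a feel for the API difference Andrey describes (dispatch.sync instead of dispatch), here is a minimal sketch. The action shape is an illustrative assumption:

```js
const action = { type: "user/rename", userId: 10, name: "New name" };

// Plain Redux: the action is applied locally only.
store.dispatch(action);

// Logux Redux: the action is also logged with a time mark, synchronized
// to the server, and resent to other subscribed clients.
store.dispatch.sync(action);
```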
Instead of REST, URLs, and forms, though, you will have Redux/Vuex actions. And some of these actions come from the past, so you need to check the action's creation time. Right now, our primary challenge is to provide a better API to clean the log. I told you a shiny, utopian story about adding actions to the log. But we also need to clean old actions which are not relevant anymore. For example, if you renamed a user from Old name to New name and saved the changes on the server, you don't need the old action with Old name anymore. The current cleaning API is decent, but we could do better by focusing more on modern developers, not only on distributed systems scientists.

How does Logux differ from other solutions?#

It is easy to compare Logux with AJAX 😋. With Logux you don't need to handle network errors (Logux Status will show an error widget, and Logux Redux will keep the action until the user gets a good connection). You don't need to show loaders while saving changes (you can update the UI right after the "Save" button click). In many cases, we need Redux Saga for AJAX. In Logux you just dispatch an action, and Logux will take care of sending it to the server and showing the synchronization process to the user. But with less code, you get more features. You get push updates out of the box. When one client dispatches an action (like a user renaming), the server will resend this action to other clients (Logux uses channels and subscriptions to control who is allowed to receive actions). Also, you get basic offline-first support. New actions will be applied immediately to the client UI, but they will wait in IndexedDB for an Internet connection. Of course, for good offline-first support, you need to take care of merging conflicts (when two users changed the same document). And Logux cannot fix all conflicts for you because it depends on business logic. But Logux will help you here by taking care of distributed time and Redux state time-traveling.

Of course, there is no sense in using Logux on simple web pages. With 2-3 requests, it is better to use AJAX. And of course, AJAX is still better for some unusual cases, like sending big files. But, I think, in big applications, AJAX is not a competitor for Logux. It is more interesting to compare Logux with some modern solutions, for example, GraphQL and Apollo. Having these technologies with so many great ideas inside is great. GraphQL is more focused on requesting data, though. Mutations don't have correct distributed time marks, and Optimistic UI and subscriptions still need more code. In contrast, in Logux your React components will be subscribed to data updates by default, Optimistic UI comes out of the box, and CRDT could be implemented much more simply. On the other hand, GraphQL works better with PHP, Ruby, or Python, because it does not require a WebSocket connection. Also, Apollo is a much more stable and production-ready solution. Right now I don't recommend Logux for big projects; GraphQL will be much better for them. I am making Logux for a future beyond GraphQL.

Why did you develop Logux?#

I was tired of writing 50+ lines of code to save a simple React form 😧. But I also believe in a better world: a world where all web applications have push updates and offline support. Wireless connections are always unstable, especially for the next billion users. I was tired of pressing the Reload button on any network problem during an AJAX request. We have lousy networking in applications not because developers are lazy; my forms were bad too 😅. So my dream was to have less code with better networking.
When I saw a talk about Swarm.js, I was so excited by how simple and powerful the idea of CRDT is. But it was not so easy to combine CRDT with Redux because Redux and Swarm.js have separate action logs. Then, while drinking with Dan Abramov in a bar, a simple idea was born. The Logux idea is to use one action log for everything: Redux, CRDT, networking.

What next?#

First, I need to write good docs and guides for Logux 0.2. Next, I will think about Logux 0.3: more syntactic sugar for log cleaning and API improvements based on practical experience and user feedback.

What does the future look like for Logux and web development in general? Can you see any particular trends?#

I think GraphQL, Apollo, Firebase, and gun.js show a simple trend: the next revolution will not be on the client or the server. The next revolution will be in client-server communication. We have brought so many great things to client-side development, but right now, when you need to write an AJAX request, you go back to the old jQuery-like world. With PWAs we will have more mobile web applications. But mobile users expect better networking from your web app. Push updates and offline support are standard in the iOS/Android world. If we want to compete with native applications, we should make our web applications smarter.

What advice would you give to programmers getting into web development?#

Software development should make people happy, not just solve tasks. If you are making a tool, think about DX, not only about features. If you are making an app, the user experience is more critical than frameworks and technologies.

Who should I interview next?#

Andrey Popp is one of the most underestimated React developers. Victor Grishenko is one of the best distributed systems scientists. His Swarm.js was the main inspiration for Logux. Nikita Prokopov is another great distributed systems engineer.

Conclusion#

Thanks for the interview Andrey! You might be right in that we'll see improvements next when it comes to server communication. If you want to learn more about Logux, consider the following resources:

- Andrey's talk about Logux with code examples
- Logux Redux
- Logux Server
- @logux_io on Twitter

Fastify - Fast and low overhead web framework for Node.js - Interview with Tomas Della Vedova

Servers, servers, servers. I've written a lot of Node.js servers since I began using it. Initially, I went with the API it provides, but after a while most of the community settled on using Express. In this interview you'll learn about an alternative by Tomas Della Vedova. Fastify has been designed with performance in mind.

Can you tell a bit about yourself?#

How would you describe Fastify to someone who has never heard of it?#

Fastify is an opinionated web framework for Node.js; it focuses on performance and low overhead. The architectural pattern that we used to build it enables microservice-ready applications. The core is small, and it exposes powerful APIs to extend it with all the functionality that is needed.

How does Fastify work?#

Fastify is the handler function that you pass to the HTTP core module, nothing more. We started building it from scratch, adding one feature at a time. For every new feature, we worked a lot on optimization and on lowering the overhead of the feature, trying to reach "almost zero" overhead. Out of the box, Fastify supports hooks, Express-style middlewares, decorators, HTTP2, and async-await.

```js
const fastify = require("fastify")();

fastify.get("/", async (request, reply) => ({ hello: "world" }));

fastify.listen(3000, function (err) {
  if (err) {
    throw err;
  }
  fastify.log.info(
    `Server listening on ${fastify.server.address().port}`
  );
});
```

We have extracted from Fastify all the code that could be separated from the framework itself and used in other situations, for example, our router, the serialization library, and the middleware engine. We released them as separate libraries that don't need Fastify as a dependency, so you can use them in your current framework as well, or even build one just for your needs!

How does Fastify differ from other solutions?#

Given one of the core goals of the project is performance, we do not land any feature unless the implementation is well optimized and the cost that we pay is as low as possible. Fastify has a robust plugin system; it guarantees the load (and close) order of the plugins and creates zero-cost encapsulation to help users maintain clean and ordered code. It will also help the user write decoupled code and use different versions of the same plugin (or maybe with different configurations) in different subsystems of the application. A similar approach with Express would cause the performance to drop significantly for each nesting level. Furthermore, the plugin model is based on reentrant locks, and given it's graph-based, Fastify handles asynchronous code correctly while guaranteeing the loading order and the close order of the plugins. The plugin system creates a directed acyclic graph, and in this way, it is impossible to create cross dependencies, and you can use different versions of the same plugin in different parts of your application.

Directed acyclic graph

Thanks to this architecture it is easy to split your application into multiple microservices, because we'll help you create a system where separation of concerns and cohesion are two essential keys of your application.

Directed acyclic graph services

Why did you develop Fastify?#

Almost a year and a half ago, Matteo, the coauthor of Fastify, and I started working on a nice project, fast-json-stringify. By doing various performance analyses, we discovered that serializing JSON is very expensive, so we asked ourselves: can we make it faster?
We worked for 1-2 months, and we built fast-json-stringify, which is 2x-3x faster than the native method (spoiler alert: we use JSON Schema).

```js
const FJS = require("fast-json-stringify");

const stringify = FJS({
  type: "object",
  properties: {
    user: { type: "string" },
    age: { type: "integer" },
  },
});

console.log(stringify({ user: "tomas", age: 24 }));
```

We were pleased with the results, so we started optimizing other parts that usually are pretty expensive: routing, hooks, middlewares, and so on. After some time we put it all together, and Fastify was born. We wanted to challenge ourselves to build an extremely fast web framework, with the goal of getting very close to the performance of a plain Node HTTP server.

What next?#

Currently, we are close to version 1.0.0. We are focusing on fixing the last bugs, and we are listening to feedback from our early adopters. In this way, we can try to meet their needs and handle breaking changes. We are also updating the plugin API to allow users to declare their dependencies, and we are providing better support for async-await. An example of how async-await works in Fastify:

server.js:

```js
async function build(opts) {
  const fastify = require("fastify")(opts);

  fastify.register(require("fastify-helmet"));
  fastify.register(require("fastify-mongodb"), {
    url: "mongodb://mongo/db",
  });
  fastify.register(require("./lib"), { prefix: "/v1" });

  await fastify.ready();
  return fastify;
}
```

lib/index.js:

```js
async function plugin(fastify, opts) {
  const { db } = fastify.mongo;
  const collection = db.collection("users");

  // You can reach this route with `/v1/user/:id`.
  fastify.get("/user/:id", async (request, reply) => {
    try {
      return await collection.findOne({ id: request.params.id });
    } catch (err) {
      request.log.error(err);
      return new Error("Something went wrong");
    }
  });
}

module.exports = plugin;
```

We want our community to continue to grow, so every time a plugin creator sends their work to us, before adding it to our "official" plugin list, we help them improve their code (if needed) and enforce a correct use of our API. We are also constantly updating the documentation with all the hardest parts of our architectural decisions. For example, we wrote the hitchhiker's guide to plugins to help users understand the architecture of the framework and how to use the APIs that we expose correctly, and we have just updated our getting started guide.

What does the future look like for Fastify and web development in general? Can you see any particular trends?#

I hope it looks shiny! Jokes apart, one of our core design decisions is that Fastify should provide a lightweight and small core that is easy to extend with plugins. Probably most of the work we'll do in the future will be in this direction, exposing new (and low overhead) APIs to plugin creators and helping them create valuable plugins. Regarding the future of web development, I think that progressive web apps, AI, and the internet of things will play an important role. This is why with Fastify we created a "batteries not included" framework; we want to help developers build the applications they need by using the code they need. I hope that the open source world will continue to grow massively as it's doing right now, and that developers and companies will continue to release their work, in a way that everybody will continue to grow as a group, where we all help each other make valuable code to help people.

What advice would you give to programmers getting into web development?#

Try. The best way to learn new things is to try them.
A book or a workshop can help until a certain point, but if you want to really understand how something works, just write it. Get your hands dirty.

If you have a problem with a library or a question on how to approach a pattern or technology, ask. But remember to always be kind to others; we are all human beings, and the way we interact with each other matters. If you open an issue, be kind, thank the maintainers for the work that has been done, explain your problem, and if you can, propose a solution. It will be appreciated.

Contribute to open source, even with small things. The open source world is amazing, and the more you give, the more you get. It's hard to measure how much the open source world has given to me; it has helped me become a better developer and a better person. Do not be discouraged by other, more experienced developers; everyone has been a beginner, and everyone will help you, just as you will help other young developers in the future.

Who should I interview next?#

Yoshua Wuyts, creator of Choo and many other cool things.

Conclusion#

Thanks for the interview Tomas! Fastify looks like something I should try on my servers. You can learn more from the Fastify site or Fastify on GitHub.

BEM - Methodology to enable reuse in front-end development - Interview with Sergey Berezhnoy

Developing large scale applications requires a certain amount of discipline. Sometimes it is enforced by the environment; sometimes you have to apply it yourself through conventions. Likely both are needed to some extent. As applications grow in complexity, the need for clear architecture grows unless you want to end up with a big ball of mud or a similar disaster. To learn more about the topic, I am interviewing Sergey Berezhnoy, one of the authors of BEM.

Can you tell a bit about yourself?#

I have been working at Yandex since 2005, and I participate in the development of such Yandex services as Search, Mail, Blog Search, the Yandex blogging platform, and Video and Image search. Along with service development, I created internal tools for web development. I am one of the co-authors of BEM.

How would you describe BEM to someone who has never heard of it?#

BEM is an architecture pattern that allows you to achieve flexible and maintainable code. It's a way to make your code self-descriptive and predictable, keeping everything consistent and familiar to all the developers on a project. And all you need to achieve this for literally any interface is just a few concepts:

- Blocks to split an interface into components
- Elements to split complex blocks into parts
- Modifiers to express state
- Mixes to have different blocks or elements on the same DOM node
- Redefinition levels to build a project layer by layer avoiding copy/paste.

How does BEM work?#

The idea behind BEM is similar to Web Components or any other component approach to web development. Grasping a sizeable, complex system at once is hard. Developers may split it into simple reusable blocks which are easy to use and maintain. And then they can be used as Lego bricks to build anything. BEM provides best practices for that as well as ready-made tools and block libraries.
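As a rough illustration of the naming this leads to (the class names are invented for the example), a block with two elements and a state modifier could look like this in JSX:

import React from "react";

// `search-form` is the block, `__input` and `__button` are its
// elements, and `--disabled` is a modifier expressing state.
const SearchForm = ({ disabled }) => (
  <form className="search-form">
    <input className="search-form__input" />
    <button
      className={
        "search-form__button" +
        (disabled ? " search-form__button--disabled" : "")
      }
      disabled={disabled}
    >
      Search
    </button>
  </form>
);

export default SearchForm;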
How does BEM differ from other solutions?#

First, BEM is just a concept (similar to OOP). The main power of BEM is that it works for any tech (HTML, CSS, JS, tests, documentation, etc.) and everything can be described with just a few simple concepts. BEM can be implemented in many different ways in any programming language. Of course, we have our own implementation, and as we love JavaScript, it's JS based.

Why did you develop BEM?#

Initially, we faced a few problems:

- Avoiding copy/paste on different projects with the same design style guide
- Keeping large projects maintainable
- Having a unified structure on different projects to make it familiar to developers

So BEM was started to solve these problems but eventually became much more powerful. For all the steps of BEM evolution, see The history of BEM.

What next?#

I'll continue to popularize the BEM methodology and develop examples of implementations in different techs. Here's one of them for React:

- bem-react-core
- bem-react-components
- create-bem-react-app

There's also a video from FullStackConf where we talked about all the features of BEM.

Everything about interaction design in large teams is also important for me. As the department of search interfaces development continues to grow, we want to get benefits from the fact that so many cool people gathered in one place. It's a pity though that such insights are hard to open source.

What does the future look like for BEM and web development in general? Can you see any particular trends?#

I'm sure that the component approach will continue to evolve. For example, React nowadays implements the same ideas we used in our code several years ago. I hope that other concepts of the BEM methodology will become more widely known because ultimately they will make it easier to build web interfaces, which benefits all of us as users.

What advice would you give to programmers getting into web development?#

Always be open to learning something new. On the other hand, do not reinvent the wheel but improve it: do not hurry to create your own solution, take your time to find existing ones and improve them.

Conclusion#

Thanks for the interview Sergey! Conventions have power! To learn more about BEM, visit bem.info.

SurviveJS - Summary of 2017

It was quite a year for me. You could say a life-changing one even. I visited at least ten countries in Europe, and most of them were new acquaintances to me. I traveled more in one year than in my entire life before. I made more friends in one year than in the years before. I found a new place to live. I began learning a new language, German, and I am starting to get the hang of it.

Publishing#

As if that wasn't enough, I also published a new paper book, SurviveJS - Webpack, and I am progressing on a new one about maintenance with the assistance of Artem Sapegin. I know I have to revise the entire React book, but that has to happen after the maintenance one is out of the oven.

This blog grew by about sixty posts, most of which were interviews. It is humbling to see the amount of variety in the community. If you know good topics to cover in interviews, get in touch.

The site received well-needed technical tweaks as a part of its technical debt was paid away. There's still some work left, but now it's faster and easier to perform the needed improvements to serve the community better.

Public Appearances#

I gave multiple public appearances across Europe in various meetups and conferences. It all started by coincidence as ReactiveConf invited me to tour with them about webpack. It was during this trip when I discovered Vienna and its welcoming community. I spent a life-changing summer there, and it still keeps changing as I am shaping my new life in the city.

One of the more interesting sessions for me personally was the one about how I bootstrapped this little business for myself. You can find the slides online. It's a topic I would like to revisit at a better time. Personal development itself is something I should study in greater detail.

Business#

Business-wise the year wasn't as good as the first one, but the advances on the personal side more than made up for that. I think I've finally found a business model that makes sense to me. I realized that I should do a mix of consulting/training and writing. I can use the consulting income to allow me to write while writing allows me to get those consulting clients. What's missing is stronger integration between these two.

While doing my webpack workshops, I realized it's convenient to build them on top of the book. I need to be more intentional about this, though, and modularize all the content so that it works for multiple purposes like this. The webpack book and the maintenance book are quite close to this goal. The point is that, done right, this would allow me to provide an online course offering to support the model.

My most immediate goal is to get the maintenance book out there so people find it. Content-wise it's beginning to look good, but I require more feedback to push it further. I have some ideas on how to improve it, but feedback allows me and Artem to go faster.

Even if the business goes wrong, it's not like I will run out of things to do. There's a fantastic amount of content to develop and refine. This work alone will keep me busy for months assuming there's no other work to be done.

Finnish Code Ambassador of 2017#

One of the highlights of the year was the fact that I was chosen as the Finnish Code Ambassador of 2017. It was my first major award, and I feel a large part of that belongs to the community that allows me to work this way.

React Finland - 24-26.4.2018, Helsinki#

I began to organize React Finland, the first major React conference in Finland, with a group of friends.
It started as a joke but became something quite serious fast. So far organizing it has been a definite challenge, and I've been picking up a lot of new skills while using older ones gained during the past few years. I have a feeling the event will be one of the highlights of the next year for me.

Conclusion#

2017 was a memorable year in many ways. It was the most intense and tiring year I've gone through. I went far beyond my comfort zone, but I suppose that's necessary if you want to progress in life. The year opened a lot of new possibilities, and although there are challenges ahead, I have a good feeling about 2018.

controllerim - MobX Inspired State Management for React - Interview with Nir Yosef

When you are writing applications, eventually you have to decide how to manage state. You can get far with React setState and lifting the state in the component hierarchy as you go. Eventually that might become cumbersome, and you realize using a state manager might save time and effort. This is the reason why solutions like Redux, MobX, and Cerebral are popular in the community.

To provide another point of view, you will hear this time from Nir Yosef, the author of controllerim. It's a solution that builds on top of MobX and has been designed with testability in mind.

Can you tell a bit about yourself?#

My name is Nir, and I am a front-end developer at Wix.com, with over two years of experience in React and MobX, and now gaining some experience with React Native and Android.

How would you describe controllerim to someone who has never heard of it?#

Controllerim is a state management library. It gives you the ability to create logic controllers for your React components and makes your components automatically reactive to any change in the controllers. All of this is done with almost zero boilerplate.

How does controllerim work?#

Controllerim uses MobX Observables behind the scenes, so all the optimizations of MobX in terms of performance are also relevant for Controllerim.

How does controllerim differ from other solutions (like Redux and MobX)?#

Controllerim brings back the idea of the well-known Controller, the C of MVC, and abandons the singleton Stores concept that Redux (using Flux terminology) gave birth to.

Why did you develop controllerim?#

When I first came across React, I almost immediately came across Redux. It seemed like Redux was the only way to do React. Everyone was talking about it, so I decided to give it a try. After reading some tutorials, I was quite amazed by its complexity. All the different terms (thunk, reducers, selectors, map dispatch to props, etc.) weren't so clear to me, and it seemed like a considerable amount of boilerplate. Something just felt wrong. It seemed like a strange way to implement the good old MVC. I think the article by André Staltz says it all.

After some playing around with a dummy project, trying to crack this Redux thing, I came across MobX and dumped Redux for good. MobX was much clearer and straightforward. I used MobX for over a year with my team, and it was pretty good, but some problems immediately came up:

- MobX Observables are not vanilla JavaScript objects. They are full of other junk, and we soon started to insert mobx.toJs() conversions all over the place.
- MobX doesn't tell you how to structure your code, so we took the concept of singleton stores from Redux. Very soon we started to wonder how we should pass the stores around and how we should test components. Should we mock all the stores? Who needs to clean the stores when a component enters the screen? We tried to use mobx.inject and mobx.provide, but those didn't play well with our tests.

So MobX wasn't perfect after all. At this point, I again started to wonder what happened to the good old MVC. Why are things getting so much more complicated on the web? And then I decided to write down all the pain points of our current architecture:

- We have to get rid of the toJS thing. I want everything to be a plain JavaScript object.
- We have to get rid of the singleton stores, and we must bind the stores' life cycle to the components' life cycle.
- We must find a way to share data from one store to another, but I wanted to make it strict: it will only be possible to fetch data from stores that are higher in the hierarchy chain of the app, with AppStore as the root.
- Everything MUST be testable.

After writing it down, I found out that I didn't have a Store anymore. I had a Controller. The good old Controller. I knew I was on the right track. The API just wrote itself. I only needed to figure out the way to make it happen, and it wasn't so hard. The final result was Controllerim.

If you wonder about the name, I tried to name it "Controllers", but it was already taken. I tried React-controllers, but it was also taken. In Hebrew, the 'im' suffix is the plural suffix, so I just named it Controllerim. :)

So how does controllerim look?#

Let's say we have an App component as the root of our web app, and that we have a Child component deeply nested in the app. Any data that we put on the AppController will be available to all other components in the app for as long as the app is alive, so let's create an AppController and put some application data on it:

import { Controller } from "controllerim";

class AppController extends Controller {
  constructor(componentInstance) {
    super(componentInstance);
    this.state = { userName: "Bob" };
  }

  getUserName() {
    return this.state.userName;
  }

  setUserName(name) {
    this.state.userName = name;
  }
}

So a controller is just an ES2015 class that extends Controller and has some state and getter and setter methods. Now let's connect the controller to the App component:

import React from "react";
import { observer } from "controllerim";

class App extends React.Component {
  componentWillMount() {
    this.controller = new AppController(this);
  }

  render() {
    return (
      <div>
        <h1>Welcome {this.controller.getUserName()}!</h1>
        <CompA />
        <CompB />
      </div>
    );
  }
}

export default observer(App);

Easy, right? We just need to init the controller in componentWillMount, and we need to make sure that we wrap the component with observer, and that's it! Every change in the controller will be reflected by the view.

Now, let's say that Child is some deeply nested component and that it should allow us to preview and edit the userName when we click on a save button. Let's start with creating ChildController:

import { Controller } from "controllerim";

class ChildController extends Controller {
  constructor(componentInstance) {
    super(componentInstance);
    this.state = { input: "" };
  }

  getInput() {
    return this.state.input;
  }

  setInput(value) {
    this.state.input = value;
  }

  saveInput() {
    this.getParentController("AppController").setUserName(
      this.state.input
    );
  }
}

The only new thing here is the call to getParentController(). Controllerim allows you to get any parent controller, not only a direct parent, so we just save the userName, and because everything is reactive, this change will be reflected in all the views that make use of the userName prop from App. Let's finish by creating Child:

import React from "react";
import { observer } from "controllerim";

class Child extends React.Component {
  componentWillMount() {
    this.controller = new ChildController(this);
  }

  render() {
    return (
      <div>
        <input
          value={this.controller.getInput()}
          onChange={(e) =>
            this.controller.setInput(e.target.value)
          }
        />
        <button onClick={() => this.controller.saveInput()}>
          Save
        </button>
      </div>
    );
  }
}

export default observer(Child);

And that's it! Simple, isn't it?

What do people think about Controllerim?#

It depends. The ones that are already familiar with MobX are very supportive.
The Redux people are more suspicious and begin to recycle arguments they heard about MobX, so I think it would be nice to tackle the most frequently recycled arguments once and for all:

It's magic, and we don't like magic: Controllerim is NOT magic. Controllerim works just like React components' native state: when you touch a setter on the controller, Controllerim triggers a forced update of your component. So where does MobX enter the picture? Controllerim utilizes MobX to make better update decisions. Thanks to MobX, instead of re-rendering on every setter, Controllerim will trigger a re-render only when needed.

But what if you need some data to be accessed from everywhere? You have to use singletons: No, you don't. If you need some data to be available to all the components in your app, then this data is application data; just put it in your AppController (the root controller of your app), and it will be available to all other components for as long as your app lives.

It looks just like React's state, so why not just use it: Controllerim looks like React's native state by design. The problem with the native state is that it's hard to share between different components, and it's awkward to test. Controllerim solves those problems, and it even gives you a more comfortable way to manipulate the state: instead of this.setState({ some: { nested: { prop: true } } }), you can just write this.state.some.nested.prop = true.

What next?#

Use Controllerim all over the place to make it battle-tested. :)

What does the future look like for controllerim and web development in general? Can you see any particular trends?#

I think that Controllerim has the potential to be the best Redux alternative out there. In general, I think that React is here to stay, and the next giant step will be in the field of CSS.

What advice would you give to programmers getting into web development?#

If something doesn't feel right, don't be fooled by its popularity.

Who should I interview next?#

You should interview someone from the CSS community. This field of web development needs a little push.

Conclusion#

Thanks for the interview Nir! Controllerim looks like a great abstraction over MobX, and I hope people find it. The code feels amazingly simple. Learn more about Controllerim on GitHub.

React Finland - Your Chance to Learn React Up North - Interview with Juho Vepsäläinen

There are a lot of React conferences these days. React has become one of the most popular web technologies during the past few years, so this is understandable. Given I, Juho Vepsäläinen, am one of the organizers of React Finland (24-26.4.2018, Helsinki), I thought it would be a good idea to provide an inside view on the event.

Can you tell a bit about yourself?#

These days I consult for companies ranging from small startups to big enterprises like eBay or Kapsch. I can provide perspective on how to improve their current workflow, especially on the technical side. I train people as needed. This process, in turn, helps me to develop the book offerings you can find on this site. The writing brings in the consulting clients, and so far it has been working fine.

How would you describe React Finland to someone who has never heard of it?#

React Finland brings together developers from both east and west. It is held in late April (24-26.4) 2018 and contains a wide range of topics related to React. It is perhaps the northernmost React conference in the world and a perfect excuse to visit Finland, the most boring country in the world.

What does React Finland offer?#

We settled on a three-day program early on. To keep the difficulty manageable, we decided to go with one day for workshops and then two days for presentations in a single track format. The schedule has an ample amount of time per presenter and allows flexibility, so we can have panels and lightning talks as a part of the days.

We have a wide range of speakers on topics related to React. Most of the topics are technical, and we cover ideas from state management, styling, testing, React Native, React VR, and also upcoming technologies such as Reason. I feel we have a good program that can serve people with various amounts of React experience.

Especially the workshop day should be exciting. The state management workshop by Michel Weststrate is a real masterclass as it takes the whole day. The rest of the sessions are up to four hours, and you will have time to participate in two sessions depending on your interest. We split the workshop profit with the speakers as we know organizing and coming up with the material is hard work.

How does React Finland differ from other events?#

Given the conference is held in late April, the weather isn't our selling point. It's still spring and chilly, but that's not the point. Finland is one of those countries most people know but have never visited. The idea is to provide an excellent excuse to visit this boring country in the north so you have stories to tell and can confirm the country indeed exists. You will learn something about sauna, sisu, and salmiakki.

I am happy with the program and the speakers we managed to attract. I feel both the local and international audience will be able to get a lot out of the event. It can become a meeting point between the east and the west thanks to the location that's relatively easy to reach from both directions. Finland has always been between the east and the west given its past as a buffer country.

Why did you decide to arrange React Finland?#

It was around August of this year (2017) that we joked about organizing a React conference in Finland at Koodiklinikka, the most popular development Slack of Finland. As it happens, the joke is becoming a reality. It didn't take long for me to realize that a conference would be feasible, especially given there hasn't been an international React conference in Finland, and there's definite demand for one.
As a result, we set up an association (harder than it sounds), set up a team, used our contacts to reach out to speakers, and developed the website and technology required. It has taken a lot of effort so far, and the fact that this is volunteer-based makes it a notch harder. Finding the time and motivation to do even boring tasks is the hardest part, but it's required as, without a certain amount of work, there can be no conference.

React Finland is a conference from developers to developers. Organizing a conference this way comes with different pressures than a commercial one. There are always particular struggles you have to go through, but so far we've managed well.

For me, this was a chance to learn from conferences I had been to and try to avoid the mistakes they have made. I feel the most significant thing we can do better is to serve our speakers better by connecting them with the local community and generating business for them. It's only fair to reward them as they are one of the critical parts that make the event work.

What next?#

We recently announced that ticket sales will go live on the 27th of December. That is your chance to get an early bird ticket at an affordable price (250€ for two days, 150€ for a half-day workshop). Our goal is to sell close to 300 tickets. Most likely a majority of them will be sold to local developers, but we welcome an international audience as well.

What advice would you give to programmers getting into web development?#

Be prepared to learn and change your mind a lot. Keep an eye on the hype. You don't always have to be the first. Focus on delivering value to your business, and the rest will follow.

Who should I interview next?#

I know who I'm going to interview next, but I'll keep that as a secret.

Any last remarks?#

A lot of technical development has gone into the conference. Check out the site repository, the content repository, the GraphQL API, and the mobile app for example. Organizing this conference taught me how to develop small conference sites effectively, and I found a nice model for doing this. That might be worth a blog post of its own.

Conclusion#

I have a feeling React Finland will be a good conference. That said, it's important we attract the right people there, and this is where you come in! Going to the event might be one of the better excuses to visit Finland. You can learn more about the event at its site. Subscribe to the mailing list or follow @ReactFinland on Twitter to stay in the loop.

redux-saga-test-plan - Test Redux Saga with an easy plan - Interview with Jeremy Fairbank

Redux Saga is famous for being easy to test, but what if it could be even more comfortable? redux-saga-test-plan by Jeremy Fairbank was designed precisely for this purpose.

Can you tell a bit about yourself?#

I work for Test Double. We believe that software is broken, and we're here to fix it. Our mission is to improve how the world builds software. I've been doing front-end development for almost ten years now and enjoy the paradigms that React and Redux helped introduce to the front-end world. I've created a few open source projects that work well with the React and Redux ecosystem such as revalidate, redux-saga-router, and, the topic of this interview, redux-saga-test-plan.

I'm a huge fan of functional programming and Elm. In fact, I'm currently writing a book on Elm with The Pragmatic Programmers called Programming Elm: Build Safe and Maintainable Front-End Applications. The book is over halfway complete and should be available sometime in Spring 2018.

How would you describe redux-saga-test-plan to someone who has never heard of it?#

redux-saga-test-plan is a library for easily testing redux-saga. If you're unfamiliar with redux-saga, check out the redux-saga interview with creator Yassine Elouafi. redux-saga-test-plan removes the headache of manually testing saga generator functions, which couples your tests to their implementations. It offers a declarative, chainable API for testing that your saga yields certain effects without worrying about other effects or the order effects were yielded. It also runs your saga with redux-saga's runtime so that you can write integration tests, or you can use redux-saga-test-plan's built-in effect mocking to write unit tests too.

How does redux-saga-test-plan work?#

Let's look at some example sagas to see how redux-saga-test-plan makes it easy to test them.

Simple API Saga#

Given this simple saga for fetching an array of users:

import { call, put } from "redux-saga/effects";

function* fetchUsersSaga(api) {
  const users = yield call(api.getUsers);
  yield put({ type: "FETCH_USERS_SUCCESS", payload: users });
}

You can test it with redux-saga-test-plan like this:

import { expectSaga } from "redux-saga-test-plan";

it("fetches users", () => {
  const users = ["Jeremy", "Tucker"];
  const api = {
    getUsers: () => users,
  };

  return expectSaga(fetchUsersSaga, api)
    .put({ type: "FETCH_USERS_SUCCESS", payload: users })
    .run();
});

The expectSaga function accepts a saga as an argument as well as any additional arguments for the saga itself. Here, we pass in the fetchUsersSaga and inject a mock api to fake the API response.

expectSaga returns a chainable API with lots of useful methods. The put method is an assertion that the saga will eventually yield a put effect with the given FETCH_USERS_SUCCESS action. The run method starts the saga. redux-saga-test-plan uses redux-saga's runSaga function to run the saga like it would be run in your application. expectSaga tracks any effects your saga yields, so you can assert them like we do with put here.

Sagas are inherently asynchronous, so redux-saga-test-plan returns a promise from the run method. You need that promise to know when the test is complete. In this example, we're using Jest so that we can return the promise directly to it. Because redux-saga-test-plan runs asynchronously, it times out your saga after a set amount of time. You can configure the timeout length.

Built-in Mocking#

If you don't inject dependencies like the api object, you can use expectSaga's built-in mocking mechanism called providers.
Let's say you import api from another file and use it like this instead:

import { call, put } from "redux-saga/effects";
import api from "./api";

function* fetchUsersSaga() {
  const users = yield call(api.getUsers);
  yield put({ type: "FETCH_USERS_SUCCESS", payload: users });
}

You can mock it with the provide method like this:

import { expectSaga } from "redux-saga-test-plan";
import api from "./api";

it("fetches users", () => {
  const users = ["Jeremy", "Tucker"];

  return expectSaga(fetchUsersSaga)
    .provide([[call(api.getUsers), users]])
    .put({ type: "FETCH_USERS_SUCCESS", payload: users })
    .run();
});

The provide method takes an array of matcher-value pairs. Each matcher-value pair is an array with an effect to match and a fake value to return. redux-saga-test-plan will intercept effects that match and return the fake value instead of letting redux-saga handle the effect. In this example, we match any call effects to api.getUsers and return a fake array of users instead.

Dispatching Effects and Forked Sagas#

redux-saga-test-plan can handle more complex saga relationships like this:

import { call, put, takeLatest } from "redux-saga/effects";
import api from "./api";

function* fetchUserSaga(action) {
  const id = action.payload;
  const user = yield call(api.getUser, id);
  yield put({ type: "FETCH_USER_SUCCESS", payload: user });
}

function* watchFetchUserSaga() {
  yield takeLatest("FETCH_USER_REQUEST", fetchUserSaga);
}

In this example, watchFetchUserSaga uses takeLatest to handle the latest FETCH_USER_REQUEST action. If something dispatches FETCH_USER_REQUEST, then redux-saga forks fetchUserSaga to handle the action and fetch a user by id from the action's payload.

You can test these sagas with redux-saga-test-plan like this:

import { expectSaga } from "redux-saga-test-plan";
import api from "./api";

it("fetches a user", () => {
  const id = 42;
  const user = { id, name: "Jeremy" };

  return expectSaga(watchFetchUserSaga)
    .provide([[call(api.getUser, id), user]])
    .put({ type: "FETCH_USER_SUCCESS", payload: user })
    .dispatch({ type: "FETCH_USER_REQUEST", payload: id })
    .silentRun();
});

redux-saga-test-plan captures effects from forked sagas too. Notice that we call expectSaga with watchFetchUserSaga but still test the behavior of fetchUserSaga with the put assertion. We use the dispatch method to dispatch a FETCH_USER_REQUEST action with a payload id of 42 to watchFetchUserSaga. redux-saga then forks and runs fetchUserSaga. takeLatest runs in a loop so that redux-saga-test-plan will time out the saga with a warning message. You can safely silence the warning with the alternative silentRun method since we expect a timeout here.

Error Handling#

You can use providers to test your saga's error handling too. Take this new version of fetchUsersSaga that uses a try-catch block:

function* fetchUsersSaga() {
  try {
    const users = yield call(api.getUsers);
    yield put({ type: "FETCH_USERS_SUCCESS", payload: users });
  } catch (e) {
    yield put({ type: "FETCH_USERS_FAIL", payload: e });
  }
}

You can import throwError from redux-saga-test-plan/providers to simulate an error in the provide method:

import { expectSaga } from "redux-saga-test-plan";
import { throwError } from "redux-saga-test-plan/providers";

it("handles errors", () => {
  const error = new Error("Whoops");

  return expectSaga(fetchUsersSaga)
    .provide([[call(api.getUsers), throwError(error)]])
    .put({ type: "FETCH_USERS_FAIL", payload: error })
    .run();
});

Redux State#

You can also test your Redux reducers alongside your sagas.
Take this reducer for updating the array of users in the store state:

const INITIAL_STATE = { users: [] };

function reducer(state = INITIAL_STATE, action) {
  switch (action.type) {
    case "FETCH_USERS_SUCCESS":
      return { ...state, users: action.payload };

    default:
      return state;
  }
}

You can use the withReducer method to hook up your reducer and then assert the final state with hasFinalState:

import { expectSaga } from "redux-saga-test-plan";

it("fetches the users into the store state", () => {
  const users = ["Jeremy", "Tucker"];

  return expectSaga(fetchUsersSaga)
    .withReducer(reducer)
    .provide([[call(api.getUsers), users]])
    .hasFinalState({ users })
    .run();
});

Available Effect Assertions#

Here are the other effect assertions available for testing:

- take(pattern)
- take.maybe(pattern)
- put(action)
- put.resolve(action)
- call(fn, ...args)
- call([context, fn], ...args)
- apply(context, fn, args)
- cps(fn, ...args)
- cps([context, fn], ...args)
- fork(fn, ...args)
- fork([context, fn], ...args)
- spawn(fn, ...args)
- spawn([context, fn], ...args)
- join(task)
- select(selector, ...args)
- actionChannel(pattern, [buffer])
- race(effects)

Other Features#

- Snapshot testing
- Partial assertions
- Negated assertions
- Assert a saga's return value
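For instance, partial and negated assertions might look like this (a sketch reusing the fetchUsersSaga and api from the earlier examples):

import { call } from "redux-saga/effects";
import { expectSaga } from "redux-saga-test-plan";
import api from "./api";

it("signals success without a failure", () => {
  const users = ["Jeremy", "Tucker"];

  return expectSaga(fetchUsersSaga)
    .provide([[call(api.getUsers), users]])
    // Partial assertion: match on the action type alone,
    // ignoring the payload.
    .put.actionType("FETCH_USERS_SUCCESS")
    // Negated assertion: the saga should not report a failure.
    .not.put.actionType("FETCH_USERS_FAIL")
    .run();
});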
How does redux-saga-test-plan differ from other solutions?#

- Only test the effects you're interested in with expectSaga. You don't have to manually iterate through your saga's yielded effects, which decouples your test from the implementation.
- A declarative, chainable API with less setup for testing sagas. Other options that I've seen use imperative APIs with more setup steps and only let you test certain effects.
- One of the few saga testing libraries that lets you also test your Redux reducers.
- Test forked sagas many layers deep.
- Built-in mocking with static and dynamic providers.
- Negated assertions. You can test that your saga did not yield a particular effect.
- Partial assertions. For example, you can test that your saga put a particular type of action without worrying about the action payload.

Why did you develop redux-saga-test-plan?#

I grew tired of manually testing sagas by iterating through yielded effects like this:

function* fetchUsersSaga() {
  const users = yield call(api.getUsers);
  yield put({ type: "FETCH_USERS_SUCCESS", payload: users });
}

it("fetches users", () => {
  const users = ["Jeremy", "Tucker"];
  const iter = fetchUsersSaga();

  expect(iter.next().value).toEqual(call(api.getUsers));

  expect(iter.next(users).value).toEqual(
    put({ type: "FETCH_USERS_SUCCESS", payload: users })
  );
});

These tests took a long time to write and coupled the test to the implementation. One small change in the order of effects would break a test even if it didn't change the saga's overall behavior. Ironically, I created a testSaga API that took some of that boilerplate away but still coupled tests to their implementation. I finally set out to create a more user-friendly API that removed most of the boilerplate and let you focus on testing the behavior you were most interested in, and this is how expectSaga was born.

What next?#

Writing my Elm book is currently consuming a lot of my time, so I've had to take a short break from redux-saga-test-plan. However, the next big plan is to support redux-saga v1, which adds support for effect middlewares. Effect middlewares let you intercept effects to return a mock value. I hope to simplify expectSaga's implementation of providers with effect middlewares. There's a nice backlog of issues for other cool features like new helpful assertions and integrating with a full Redux store too. Contributors are welcome!

What does the future look like for redux-saga-test-plan and web development in general? Can you see any particular trends?#

I'm not entirely sure because it depends on the life of redux-saga. Mateusz Burzyński and all the contributors have been doing a great job maintaining it. It's a great sign that they're working toward v1. But front-end development can move and change so fast. For example, we've seen a massive rise in the popularity of RxJS and redux-observable. As long as there is broad support for redux-saga in front-end applications, I think redux-saga-test-plan will stick around and fill a much-needed testing niche. Testing saga generators is hard, so redux-saga-test-plan will hopefully continue to make it easy. That being said, I don't always get to use redux-saga with my client projects, so I could use the support of other contributors to make redux-saga-test-plan the best it can be for testing.

As far as trends, I think front-end development is heading toward better maintainability and safety with static typing. Elm, TypeScript, and Flow are making it easier to build robust front-end applications. Static types can catch so many simple bugs and mistakes to help you refactor code more confidently.

What advice would you give to programmers getting into web development?#

You don't need to keep up with every new library and framework coming out. Focus on a stack that you like and build fantastic software. Don't let others make you feel like you're not a real developer because you're not up-to-date with the latest JavaScript framework. What's most important is understanding the language you're working with and how to stick to good software engineering practices.

Find a mentor that's empathetic and eager to help you. Also, ask to speak at a meetup or submit to a conference. You'd be surprised how many people sometimes aren't experts on the topics they share (I've been there for sure). You can share the pain points you experienced learning a technology and offer your unique perspective on what you love about it. Then, you can inspire and empower other newcomers.

Who should I interview next?#

I might be a little biased because I work for Test Double, but you should interview Justin Searls. He speaks a lot about testing, and his insight is something the JavaScript world would greatly benefit from. He maintains our awesome test double library testdouble.js, which has transformed how I think about mocking in tests.

Conclusion#

Thanks for the interview Jeremy! redux-saga-test-plan seems to complement redux-saga well. You can learn more from the redux-saga-test-plan site and redux-saga-test-plan GitHub page.

Redux Form - The best way to manage your form state in Redux - Interview with Erik Rasmussen

Forms are a frequent topic in web development as we saw in the earlier interview about a-plus-forms. This time around, I'm interviewing Erik Rasmussen about a popular option, Redux Form.

Erik has published a library-agnostic successor to Redux Form. See Final Form to learn more.

Can you tell a bit about yourself?#

I began using React immediately after it was open sourced in 2013, building side projects, and ran into all of the state management problems that Flux was introduced to solve. I was active on the Reactiflux Slack channel as Redux was taking shape before its announcement in 2015, back when what is now called reducers were still called stores.

How would you describe Redux Form to someone who has never heard of it?#

Web forms have a lot of state involved with them. It might seem like all you have to keep track of is the value of each field, but there is so much more. For example:

- Which field currently has focus?
- Are all the fields valid?
- Which fields have errors?
- Are we currently submitting the form?
- Are we currently doing some sort of async validation as the user is filling out the form?
- Which fields has the user visited (focused on)?
- Which fields has the user touched (focused on and then left)?

Redux Form manages all of that state for you, providing each field with what it needs to render: its value and onChange, onBlur, onFocus, etc. props.

How does Redux Form work?#

React prefers unidirectional data flow where a container component holds state and passes down the state and callbacks for its children to modify the state. Redux fits this model like a glove, keeping state globally and allowing mutations through dispatched actions. Redux Form dispatches actions for every event in your form and updates the global state accordingly, rerendering only the components that need to be rerendered.
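As a rough sketch of how that looks in practice (assuming the Redux Form reducer is already mounted in your store; the field names are made up):

import React from "react";
import { Field, reduxForm } from "redux-form";

// Redux Form hands each Field its value plus onChange, onBlur,
// onFocus, etc. through the `input` prop; validation state
// arrives through the `meta` prop.
const renderInput = ({ input, label, type, meta }) => (
  <div>
    <label>{label}</label>
    <input {...input} type={type} />
    {meta.touched && meta.error && <span>{meta.error}</span>}
  </div>
);

const LoginForm = ({ handleSubmit }) => (
  <form onSubmit={handleSubmit}>
    <Field
      name="username"
      label="Username"
      type="text"
      component={renderInput}
    />
    <Field
      name="password"
      label="Password"
      type="password"
      component={renderInput}
    />
    <button type="submit">Sign In</button>
  </form>
);

// The `form` name determines where this form's state lives
// in the Redux store.
export default reduxForm({ form: "login" })(LoginForm);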
How does Redux Form differ from other solutions?#

The most significant difference is that it uses Redux. Some other solutions also use Redux, but many do not. Like everything in engineering, this has its pros and cons. The main two benefits are: you can watch all of your state mutations go by in Redux Dev Tools, and you can listen to Redux Form actions in other reducers of your application, e.g., potentially updating some canonical local record when your form submission has succeeded.

The primary drawback is that you might not be using Redux at all in your application, but to use Redux Form as your form solution, you will be forced to use it. However, Redux is so prevalent in the React community that the chance that you are already using it to manage state is pretty good.

Why did you develop Redux Form?#

Well, I was building an app that had several long forms. I asked Dan Abramov in the Reactiflux Slack channel, "Redux isn't fast enough so that I could dispatch an action on every single keypress in a form, right?" He responded something along the lines of, "I don't see why not? Try it!" And Redux Form was born.

I had published a few tiny niche libraries before but had never been The Maintainer of an open source project. The community was very supportive, and I worked hard with them to sculpt Redux Form into what it is today. It has been a lot of work, but also fun and rewarding.

What next?#

Taking into account all that I have learned in maintaining Redux Form, I have recently created and released what I think might be the next generation of form state management. The solution does not depend on Redux or even React. It is a library that could potentially also be used by our brethren in the Angular, Ember, Preact, and Vue communities. The library is called 🏁 Final Form, and it's based on the Observer pattern, where different form elements on the page subscribe to different parts of the form state and only update themselves when they need to. I would encourage your readers to check it out.

What does the future look like for Redux Form and web development in general? Can you see any particular trends?#

The npm download charts for React, Redux, and Redux Form look very similar: GROWTH. 📈 According to the npm download stats for October 2017, 46% of projects using React are using react-redux, and 24% of those are using Redux Form. That's 11.2% of React projects that are using Redux Form. There are 1.6 million projects on npm that depend on Redux Form. Redux Form is here to stay.

As for web development in general, I think the declarative "UI as a function of State" paradigm that React has popularized is here to stay. From what I can tell, most of the frontline battles being fought today are attempting to drive a stake into the heart of CSS once and for all. Web Components and WebAssembly seem like promising future tech but aren't worth learning yet unless you lust for the bleeding edge.

What advice would you give to programmers getting into web development?#

As the author of react-redux-universal-hot-example, one of the more popular early React, Redux, Webpack, Hot Reloading, Server-side Rendering boilerplate repositories, I have a pretty solid understanding of the immense learning curve just to get a React app off the ground. Luckily, it's not 2015 anymore, and now we have projects like Create React App and Next.js that make it orders of magnitude easier to get started with React.

I'm also old enough to remember the barbaric days of programming without StackOverflow, but now it's exceedingly rare that programmers, even expert ones, run into a problem that someone has not already asked about, and gotten an answer for, on StackOverflow. You just have to build something and ask questions when you run into problems, which you will. But the thrill of solving them and getting your thing to work, even if it's just a silly spinning "Hello World" text, is the fire that keeps us all going.

Who should I interview next?#

I think the person I'd most like to see gain exposure from a site like this is Eric Berry, the creator of Code Sponsor. He's valiantly attempting to do the impossible: make open source sustainable and avoid developer burnout.

How much money would companies have to invest to update their code base if the sole developer of a popular OSS library were to quit and walk away? Thousands upon thousands of dollars worldwide. And how much are they paying to use these libraries? Zero. Donate buttons aren't worth the pixels they're rendered with. There are some efforts, like OpenCollective, which are beginning to address this, but it's still a huge problem.

It never occurred to me that having such a popular library could be monetized through tasteful, subtle ads on the documentation pages. How many hours a day do we coders spend looking at documentation pages? And how valuable are our eyes to get ads in front of? If you have a product that plugs into the production stack at any place or even a product that you want to advertise to people with healthy salaries, library documentation is a great place to advertise.
Anyone with an open source library with even a few dozen monthly downloads should look into CodeSponsor. $3/month > $0/month.

Conclusion#

Thanks for the interview Erik! If you are using Redux, it's hard to avoid using Redux Form. It's so handy. To learn more, check out the Redux Form site and the Redux Form GitHub page.

Redux Zero - Single Store, No Reducers - Interview with Matheus Lima

Although using Redux is straightforward once you understand the approach and its nuances, after a while it gets repetitive. It's easy to end up with a lot of "boilerplate" code that wires all the logic together. For this reason, multiple solutions addressing the issue have appeared. In this interview, we'll learn about Redux Zero by Matheus Lima. See also the Kea interview for another approach and the original Redux interview to learn more about the approach from its creator.

Can you tell a bit about yourself?#

Matheus Lima

I am Matheus Lima, a JavaScript lead developer at Concrete Solutions.

How would you describe Redux Zero to someone who has never heard of it?#

Redux Zero is a library which offers a simple way to handle state in modern applications. It's lightweight, easy to learn, and already works with React, React Native, Preact, and Svelte. We have plans to add Angular and Vue.js bindings as well.

How does Redux Zero work?#

It's simple. First, create a store. The application state will live here:

import { createStore } from "redux-zero";

const initialState = { count: 1 };
const store = createStore(initialState);

export default store;

Then, create some actions to change the state of your store:

const actions = (store) => ({
  increment: (state) => ({ count: state.count + 1 }),
  decrement: (state) => ({ count: state.count - 1 }),
});

Since the actions are bound to the store, they are just pure functions.

Now create your component. With Redux Zero your component can focus 100% on the UI and just call the actions to update the state:

import React from "react";
import { connect } from "redux-zero/react";

import actions from "./actions";

const mapToProps = ({ count }) => ({ count });

export default connect(
  mapToProps,
  actions
)(({ count, increment, decrement }) => (
  <div>
    <h1>{count}</h1>
    <div>
      <button onClick={decrement}>decrement</button>
      <button onClick={increment}>increment</button>
    </div>
  </div>
));

Last but not least, plug the whole thing in your index file:

import React from "react";
import { render } from "react-dom";
import { Provider } from "redux-zero/react";

import store from "./store";
import Counter from "./Counter";

const App = () => (
  <Provider store={store}>
    <Counter />
  </Provider>
);

render(<App />, document.getElementById("root"));

How does Redux Zero differ from other solutions?#

Redux is great, but in some cases, it's way too much. Maybe you don't want to add all of that boilerplate to your project. Or perhaps the learning curve is too steep, and you just want something simpler to work with. Redux Zero, on the other hand, is very simple. You don't have to learn about dispatchers and reducers (that's why the name is Redux Zero - because there are zero reducers). With Redux Zero you just have a store and some actions.

Why did you develop Redux Zero?#

One of our developers here at Concrete, Miguel Albernaz, was using this gist as a state management solution instead of Redux. The project was going so well that I decided to extract the code, modify it a little bit, and open source it to give back to the community. What I did not expect was this huge success in less than a month.

What next?#

Right now we have three things in mind:

- Improve the documentation
- Add a middleware
- Add Angular and Vue.js bindings (we need your help).

What does the future look like for Redux Zero and web development in general? Can you see any particular trends?#

This is a really hard question. Everything is moving so fast in web development that it's hard to make predictions.
That said, I think that web components and state management tools are here to stay.

What advice would you give to programmers getting into web development?#

Study the basics. React and Angular are probably going to die, but JavaScript and CSS won't.

Who should I interview next?#

Jason Miller.

Any last remarks?#

Try to be kind to open source maintainers. Most of them are not getting paid to develop the tools that you're using for free.

Conclusion#

Thanks for the interview Matheus! Redux Zero is one of the lightest state management solutions I've seen so far. Check out Redux Zero on GitHub or learn more in an introduction to Redux Zero.

Flow Runtime - A runtime type system for JavaScript with full Flow compatibility - Interview with Charles Pick

As discussed in the maintenance book, typing your code can be valuable in many ways. In part, it's about communication. Having the type information available makes it easier to develop tools for manipulating code (think refactoring, intelligent parameters). To understand the topic more in-depth, this time around I'm interviewing Charles Pick, the author of Flow Runtime.

Can you tell a bit about yourself?#

I run codemix, and I live in the countryside with my wife and kids near York, UK. My first exposure to programming was with BASIC on the BBC Micro at school when I was seven; ever since then I've been hooked. I worked as a nightclub DJ before becoming a full-time web developer about twelve years ago. Since 2013 I've been entirely focused on JavaScript, and I love it. I'm interested in how to make JavaScript faster, safer, less error-prone, and more comfortable to refactor.

How would you describe Flow Runtime to someone who has never heard of it?#

It's a type system for JavaScript that works while the application is running, not at compile time like TypeScript or Flow do. The core idea is that types become first-class values that you can reference and pass around like any other. Flow Runtime can represent the type of any possible JavaScript value: numbers, objects, classes, functions, etc. It verifies that the input your program receives in reality matches what you were expecting when you wrote it. The goal is to be 100% compatible with Flow - Flow catches errors at compile time, Flow Runtime catches errors when your code interacts with untyped code or user input.

How does Flow Runtime work?#

There are two main packages:

flow-runtime

flow-runtime represents types and does the actual verification. It provides a simple, composable API for defining types and matching values against them:

import t from "flow-runtime";

const stringOrNumber = t.union(t.string(), t.number());

stringOrNumber.assert(123);
stringOrNumber.assert("this is fine");
stringOrNumber.assert(false); // throws an error

You can use this standalone, and as well as type checking, it enables some pretty cool stuff, like pattern matching.

babel-plugin-flow-runtime

babel-plugin-flow-runtime takes code written with Flow annotations and turns those annotations into flow-runtime API calls. So when you write code like this:

type Thing = {
  id: number;
  name: string;
};

const widget: Thing = { id: 123, name: "Widget" };

the plugin produces this:

import t from "flow-runtime";

const Thing = t.type("Thing", t.object(
  t.property("id", t.number()),
  t.property("name", t.string())
));

const widget = Thing.assert({ id: 123, name: "Widget" });

You can try this out in the online demo.
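Putting the pieces together, types really are first-class values that you can compose and reuse. Here is a small sketch using only the calls shown above:

import t from "flow-runtime";

// Types are ordinary values, so they can be built up from parts
// and shared across a codebase.
const Id = t.union(t.string(), t.number());

const User = t.type("User", t.object(
  t.property("id", Id),
  t.property("name", t.string())
));

User.assert({ id: "abc-123", name: "Sam" }); // passes
User.assert({ id: false, name: "Sam" }); // throws an error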
How does Flow Runtime differ from other solutions?#

The vast majority of JS validation libraries have a focus on validating user input of one kind or another, whereas Flow Runtime is all about program correctness. To do this, we have to be able to represent the type of any possible JavaScript value, e.g., the shape of a class, or whether a generator function yields the right type of object. Most popular validation libraries don't handle these kinds of scenarios. The closest alternative is tcomb by Giulio Canti; it's a vast library, but it pre-dates Flow and therefore can't handle some complicated cases.

Why did you develop Flow Runtime?#

We were modernizing a pretty large, sprawling JavaScript codebase for one of our customers back in 2014 when Facebook launched Flow, and after a bit of experimentation we were sold entirely - it's an excellent technology. However, at the time it was still pretty rough around the edges and didn't support a lot of the newer ES6 features we were using. We also found introducing a type system to an existing project pretty challenging. You have to make a lot of assumptions about the untyped code, and you don't start seeing the benefit until the overwhelming majority of the codebase is converted.

The core problem is that your nice, newly typed codebase touches untyped code so often that static analysis is defeated - it's entirely possible to write fully annotated code that Flow happily accepts and is completely wrong because the real-world input does not match your expectations. So if we can't find these problems at compile time, the only way to find them is at runtime. Out of this idea came my first effort - babel-plugin-typecheck, which compiles Flow type annotations into type checks. It generates all the code inline, which makes it very hard to develop for and maintain. As Flow matured and continued getting better, it became clear that we needed a different approach if we were ever going to be compatible, and so flow-runtime was born.

What next?#

I'd like to produce a webpack plugin to make it easier to work with external type definitions. Right now you have to use a separate package called flow-runtime-cli which generates a file that you can later import, and it's all a bit messy. I also want to simplify some of the internals to make it easier for people to contribute.

What does the future look like for Flow Runtime and web development in general? Can you see any particular trends?#

In general, I think we're going to see TypeScript and Flow become more and more popular; the benefits of optional static typing are pretty clear at this point. I'd like to see the ecosystem around Flow mature. I think it's the technically superior option, but TypeScript offers a lot better tooling at the moment.

Eventually, I think we'll see Flow's type information start being incorporated into other projects, which will enable a lot of cool things. If that information were available directly to Babel, webpack, or uglify, etc., it would be possible to safely generate much faster, smaller production builds. Now that Babel supports TypeScript, it is possible to support TypeScript in flow-runtime. I'm pretty excited to try that out.

What advice would you give to programmers getting into web development?#

Take every prescriptive blog post or article you read with a pinch of salt and be particularly suspicious of anyone who tells you to always/never do X, Y or Z. Stick with well-established tools at first and don't worry about keeping up with the cutting edge - excellent documentation and support matter most. Seek out and work closely with people smarter and more experienced than you, but remember that those intelligent people are still going to be wrong a lot of the time. Comment your code, for your future benefit and because you'll spot a bunch of lurking bugs in the process.

Who should I interview next?#

I think Benjamin Gruenbaum is an unsung hero in the Open Source JavaScript community. Benjamin contributes to so many projects and discussions that it's hard to keep up; he's one of those people that is always there, helping people on Stack Overflow, supporting other developers in GitHub issues, being pragmatic and helping keep discussions productive.

Conclusion#

Thanks for the interview Charles! I think your work complements Flow well and will allow people already using it to get more out of the approach.
Check out the flow-runtime site to learn more. See the project on GitHub as well.

a-plus-forms - A+ forms. Would use again - Interview with Nikolay Nemshilov

If you think about it, a lot of web development has something to do with forms. Every time you capture information, you most likely require a form. It's one of the basic skills for a front-end developer. There are plenty of options for React, and I've reviewed the directions briefly on my slides. To get a better idea of one of them, I'm interviewing Nikolay Nemshilov about A+ forms. I met Nikolay over the internet roughly a decade ago while I was writing my first bigger web application. I used his RightJS library there. It was sort of an alternative to jQuery at the time. It has been fun to see both of our careers evolve since those days. Can you tell a bit about yourself?# That's usually enough to start. But, I suppose you want something more tangible in this case. Well, I'm a software engineer, I think. And I've been doing this long enough to start feeling a bit awkward about it. I guess my "career" as a software engineer began when IE4 was the tip of the spear, and I still had my hands on the keyboard every single day. Recently, however, I've been more focused on building teams of software engineers at my day job. I see this as just another way to create software. I suppose it's a natural outcome of attempts to realize more extensive and more significant projects. Ok, I admit, this was a bit vague. Don't get me wrong; I am not trying to dodge the question. But I feel like a personal story of a Siberian-born, working-class nerd who lives in Australia is going to be a bit confusing and beside the point. How would you describe a-plus-forms to someone who has never heard of it?# A+ forms is a React forms library that helps you not cry yourself to sleep every time your boss asks you to build a twelve-field form. It solves tedious problems like state management, validation, and data transformation in a predictable manner with minimal configuration. How do a-plus-forms work?# I think this question can be answered from multiple perspectives: how it works internally, what it exposes externally, and how it works in the context of an engineering team. It primarily revolves around the concept of an input field. I started with the familiar idea of an HTML input tag with its name, value, and onchange attributes and then applied these to all fields. Fields may also have sub-fields. In some cases, a form is one large field. The big idea here is to work with the grain of engineers' understanding of forms. Engineers think of forms as a bucket of input fields that spits out a blob of data which we then retrieve and send to the server. A+ forms provide exactly this type of developer experience. For example:

```js
import { Form, TextInput, PasswordInput } from 'a-plus-forms';

const sendToServer = ({ username, password }) => { /* ... */ };

<Form onSubmit={sendToServer}>
  <TextInput name="username" label="Username" />
  <PasswordInput name="password" label="Password" />
  <button type="submit">Sign In</button>
</Form>
```

The above is just a simple example that doesn't do justice to the level of complexity A+ forms can handle. But it demonstrates the principle behind the library: here are my fields, please give me the data entered into them, because I don't care about anything else at the moment. This mentality is shared by engineers and teams. It's a universal truth of forms, if you will. All you want is data. How do a-plus-forms differ from other solutions?# Ok, let's get this straight. I'm not going to say anything negative about other solutions - I'm not here to bash other people's work.
Besides, given enough determination, most problems can be solved with any tool available. Instead, I'll explain what's important to me. As technology matures, we humans try to use it to solve increasingly complex problems, which in turn requires increasingly sophisticated solutions. Over time this complexity starts accumulating until we forget what we were doing in the first place. Most solutions on the market address the complexity of the task with increased complexity. Over time this inevitably becomes taxing. A+ forms differ here by attempting to keep the task of creating and maintaining complex forms simple. Why did you develop a-plus-forms?# To become rich and famous and achieve world domination, naturally. But seriously, I think I have little patience for wasting time in my work. I don't know about you, but I'm easily distracted and discouraged when things are not going smoothly. There are so many awesome things waiting to be built in the world, and spending time dealing with mundane problems that have already been solved is unproductive. That's the same reason you use React. You could devote yourself to vanilla JavaScript and the DOM. But after ten rounds of writing the same repetitive boilerplate code and dealing with browser inconsistencies, you probably just want to focus on building the actual app, not figuring out why change events are not triggered on a range input in IE 10. I built A+ forms for the same reason, so my engineers and I don't have to solve this problem over and over again and can focus on making what we want to develop. What next?# I'm glad you've asked. A+ forms itself represents just the data handling core. All the components are just standard HTML-looking abstractions, which, depending on the context, can be implemented in all sorts of things. Those "all sorts of things" are the next step in my view. Here are the next extensions that I'm planning to build:

1) Bootstrap-tailored fields. A+ forms have a bunch of standard fields out of the box, but they're not tied to any particular UI component implementation. I want to create an extension that will convert those fields into standard Bootstrap fields as a means to simplify adoption further. This has been done. See a-plus-forms-bootstrap.

2) React Native fields. This one is my favorite. Form management in native mobile apps is alien to us web developers. But it doesn't have to be like this. If we re-implement those fields in React Native components, then engineers could have the same developer experience between web and native apps. Heck, they could even finally share their forms code between them.

3) HTML5 props validator. At my day job, we're using JSON Schema as a way to validate forms, but it's a bit overkill for more straightforward cases. I want to build an extension that will read standard validation props like required and pattern on the input fields and set validation rules accordingly (a rough sketch of the idea follows below).

The goal is to make A+ forms into a sort of "Barbie doll", where the community can build extensions and extra accessories for it and share their solutions with each other.
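As a sketch of how that third extension might look in use - hypothetical, since it did not exist at the time of this interview, and the prop handling is assumed rather than taken from the a-plus-forms API:

```js
import { Form, TextInput } from 'a-plus-forms';

// Hypothetical: an extension reads standard HTML5 validation props
// (required, pattern) and turns them into validation rules, so simple
// cases don't need a JSON Schema.
const saveProfile = ({ username, zip }) => { /* ... */ };

<Form onSubmit={saveProfile}>
  <TextInput name="username" label="Username" required />
  <TextInput name="zip" label="ZIP" pattern="\d{5}" />
  <button type="submit">Save</button>
</Form>
```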
What does the future look like for a-plus-forms and web development in general? Can you see any particular trends?# If you ask my opinion, I think we should stop calling it "web development" and instead just use the term "development". From first-hand experience since almost the beginning of the widespread adoption of the web, I can say one thing: engineers tend to be overprotective of their reputation, to the point of being real jerks. When I started my career, the term "web developer" was an oxymoron. The older generation didn't even want to call us "developers"; they called us "webmasters" as a way to distance themselves from us. They saw themselves as "real engineers", while we were just playing with toys. If you joined the bandwagon a bit later, you might have seen web developers belittled as not being "real programmers". Humans do nasty stuff to each other now and then. But by now the techniques developed for the "web" have become pretty much standard practice in most areas of software. For example, the process for building UIs we've developed for the "web" beats the traditional "native" UI practices. The same goes for building APIs. Node.js based microservices, serverless, load balancing, high-efficiency networking, and so on all grew out of the "web". The child has grown into an adult and feels strong. Now that adult just needs to learn how to act like an adult. That will be a trend in the near future. What advice would you give to programmers getting into web development?# Tread without fear, my friends. The "web" is here to stay. Don't listen to anyone who tells you it's not "real software engineering". Also, a bit of a downer: 99% of your time won't be about "writing clever algorithms". The sooner you accept that, the better off you will be. It's just a fairy tale that has nothing to do with reality. It's called "development", not "slinging out code", for a reason. It's really about building things, not showing how smart you are. Because guess what, everyone else is just as bright :) This observation brings us to the third and last piece of advice. Learn how business works. I know, business, ewww! But it will help you to make better decisions and understand how other people see your role in a company. Most importantly, this will help you to keep hassle to a minimum and get back to doing what you love - creating things. Who should I interview next?# Ooh, I love this! Okay, so, anyone really from Thinkmill, Envato, or Buildkite. They are all strong technically, and most of them are outstanding people. Any last remarks?# Don't forget to eat well, get enough sunlight, and, if you're an introvert, don't forget to give yourself plenty of downtime cuddling with a book to recharge your batteries. The world is an exhausting place, but it has pancakes in it. Conclusion# Thanks for the interview Nikolay! A+ forms looks like a solid form handling solution for React. You can learn more about A+ forms at GitHub.

“SurviveJS — Webpack” v2.1 and “SurviveJS — Maintenance” v0.9

What do you do when you realize a book has become too big? You split it, of course. The webpack book began to feel this way after the previous release, and this is the reason why I started to write a new book about maintenance with Artem Sapegin. Overview of the Situation# I've been collaborating with Artem since I wrote my first React book, and he was the ideal collaborator for the new effort, as we both have experience with maintaining and developing JavaScript projects of different sizes. Writing the book has been a chance for us to gather our knowledge into one place and learn in the process. Maintenance feels like an undervalued topic, and that's one of the main reasons why we decided to write the book in the first place. It's easy to start a project, but how can you ensure its success? Normally a project spends most of its lifetime in maintenance mode, so putting the focus on this topic has value. Book Improvements — “SurviveJS - Webpack” v2.1# We began by moving secondary topics from the webpack book to the maintenance book. This cleaned up the structure and made the webpack book easier to approach. It is more to the point now, although it's still a long book (~370 pages). I am happy with the results, though, as now it feels like the book can be extended again. During this process, I've applied simplifications based on my training experiences this year. I updated the book to webpack 3 and added tons of small tips and tricks here and there. A few editorial tweaks have been made to ensure the book reads well and fits the PDF format nicely. I've listed the main changes below:

- The Packages part has been eliminated. The chapter focused on consuming packages remains in the book, while the rest of the content has been moved to the maintenance book.
- The code has been formatted using Prettier. There are still trailing commas to keep the diffs simple.
- The Automatic Browser Refresh chapter has been renamed as webpack-dev-server to reflect its content better.
- The linting chapters have been rewritten and moved to the maintenance book.
- The Analyzing Build Statistics chapter has been renamed as the Build Analysis chapter.
- The Bundling Libraries chapter has been reworked and moved to the maintenance book.
- The Library Output chapter has been dropped, as the webpack documentation and the maintenance book cover the topic well.
- The Customizing ESLint appendix has been moved to the maintenance book.
- The Hot Module Replacement with React appendix has been dropped, as the official documentation covers the topic well.
- The CSS Modules portions have been moved to an appendix, as it's secondary content.
- The book structure has been simplified and streamlined where possible, so it's easier to get into the topic. At the same time, I added more tips and tricks where it makes sense.

I still have content planned for the webpack book, but even in its current state, it's better, and more focused, than the old one. If you have ideas on what specific topics to cover, let me know at GitHub. In total, 309 commits went into the book since the last release. You can find the changes at GitHub. Remember to use the "Files changed" tab, as it gives you a good overview of what's happening with the book. You can find the book below:

- “SurviveJS — Webpack” - Free online edition
- “SurviveJS — Webpack” - Leanpub edition (digital)

A part of the income (around 30%) goes to Tobias Koppers, the author of webpack. I support his work this way, given mine builds on top of his.
In fact, most of the income goes to webpack developers now! New Book - “SurviveJS — Maintenance” v0.9# The maintenance book has roughly 150 pages in its current state, and it covers topics including packaging, code quality, infrastructure, documentation, and the future. It's a light, inspirational read, and it contains plenty of techniques you can apply in your daily work. Given a large part of the content was split from the webpack book, the Leanpub edition of the maintenance book will be provided for free to those who bought the previous (v2.0) version of the webpack book or earlier. The current version of the book is missing some content, and the book is still shaping up. For this reason, it is important that you give feedback on the GitHub issue tracker. You can find the book below:

- “SurviveJS — Maintenance” - Free online edition
- “SurviveJS — Maintenance” - Leanpub edition (digital)

The book profit is split between Artem and me. We use the funds to develop further content based on demand. What Next?# I want to push the maintenance book to a content-complete state and produce a paperback version of it. The book price will go up gradually as it gets closer to completion. I have a set of tweaks planned for the webpack book, and there's a React book to update as well. Given I am based in Vienna these days, it's easy for me to do JavaScript training across Europe. I also consult occasionally, so contact me if you are interested in either offering. Conclusion# I hope you enjoy the new book and find the webpack book improvements useful! It took a lot of work to get here, and there's still more to come. Thank you for your support! Both books have specific chat channels at Gitter if you want to discuss the topics directly:

- Maintenance book Gitter channel
- Webpack book Gitter channel

You can also ask questions at my AmA. We will arrange a React conference in Finland (end of April, 2018). Perhaps I will see some of you there!

Cabbie - WebDriver for the masses - Interview with Forbes Lindesay

Testing is a lasting topic in software development. There are lots of tools, especially for JavaScript. In this interview, you'll learn about Cabbie, a WebDriver-based browser automation library by Forbes Lindesay. Can you tell a bit about yourself?# Now I'm working on a startup called Changepage, which is a tool for sharing feature announcements and bug fixes. I'm also running training workshops on React and Node. How would you describe Cabbie to someone who has never heard of it?# Cabbie is a JavaScript library for automating browsers. The primary use case is end-to-end testing, but you can use it for any task that you would usually do by hand in a browser and want to automate. It lets you do all the things you would typically do by hand, but using JavaScript. How does Cabbie work?# Cabbie uses the WebDriver protocol to control browsers. It's a standard that all major browsers support and that lets you interact with them via HTTP requests. There are two versions of Cabbie: cabbie-async is a Promise-based async library, while cabbie-sync is automatically generated from the same source code by removing all the awaits and asyncs. It uses the spawnSync API in Node to make the same library, but synchronous, which is much easier to use.
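To give a feel for the protocol underneath, here is a minimal sketch of driving a browser with raw WebDriver HTTP calls and no client library at all. The endpoint shapes follow the W3C WebDriver specification; the sketch assumes a driver such as chromedriver listening on port 9515 and a fetch implementation being available:

```js
const base = "http://localhost:9515";

async function main() {
  // 1. Start a browser session.
  const session = await fetch(`${base}/session`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ capabilities: { alwaysMatch: {} } }),
  }).then((res) => res.json());
  const id = session.value.sessionId;

  // 2. Navigate and read the page title back.
  await fetch(`${base}/session/${id}/url`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: "https://example.com" }),
  });
  const title = await fetch(`${base}/session/${id}/title`).then((res) =>
    res.json()
  );
  console.log(title.value); // "Example Domain"

  // 3. End the session.
  await fetch(`${base}/session/${id}`, { method: "DELETE" });
}

main();
```

A client like Cabbie wraps calls like these in a friendlier API - and, in cabbie-sync's case, hides the request/response cycle behind synchronous calls.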
How does Cabbie differ from other solutions?# There are several different WebDriver clients for Node. What differentiates Cabbie from most is that it has a synchronous mode with the same API as the async mode. Normally when you're writing JavaScript, it's a bad idea to write synchronous IO, but for tests, it doesn't usually matter. Writing and debugging synchronous code is more comfortable. The async mode can also be useful, though, when you're trying to run many tests in parallel. If you use Cabbie in async mode, you can run multiple tests in parallel in a single Node process. If you use it in sync mode, you need multiple Node processes to run multiple tests in parallel. webdriver.io, another solution, also has a synchronous mode, but it works a little differently. To use the synchronous mode, you have to use their entire test framework. Because Cabbie is just a library, you can use your choice of test framework. It works equally well with Jest as it does with Mocha. Cabbie also has a real focus on developer experience. For example, if you use an online service like Sauce Labs or BrowserStack to run your end-to-end tests, you can configure Cabbie to use that service just by passing cabbie("saucelabs") or cabbie("browserstack") when constructing the driver. We also normalize the methods for selecting a specific browser across all the major cloud platforms (see the Cabbie documentation on this), so you don't have as much to remember or as much to change if you switch providers. Why did you develop Cabbie?# I was developing a large web app, and we needed a way to check that everything worked when we put it together. Unit tests are great, but it's tough to keep the coverage high enough to catch every bug. With an end-to-end test, one test can cover a considerable portion of your app. It's also almost impossible to check that your frontend code and backend code work together without end-to-end testing. I tried webdriver.io and loved how they let you write synchronous end-to-end tests - it made things way more relaxed. At the time, I needed it to work on Windows, though, because not all the developers I was working with were on Apple hardware. Once I dug into making it work on Windows, I found there were lots of other things I wanted to change and tweak in the API. What next?# One of the difficulties in writing end-to-end tests can be the cryptic error messages you get back. The other is that it's easy to accidentally write tests that rely on everything running quickly. What I'm starting to do with Cabbie is add detection for common errors and print more helpful error messages that suggest next actions for how to fix these problems. I'm also adding automated retries/timeouts to most of the methods, as this makes it much easier to write stable, reliable tests. The next big project will be a test runner, similar to Jest but with features to make it easier to run tests in parallel across many browsers. What does the future look like for Cabbie and web development in general? Can you see any particular trends?# I see a lot of renewed interest in testing and static type checking. I think this is exciting. For the web to succeed, we need web apps to be reliable. I've seen tremendous benefits from TypeScript and Flow, and the competition is helping to improve both tools. Jest has transformed what we feel able to expect from testing frameworks. I think end-to-end testing is the next thing that needs a big kick in this area. The other big thing I think is improving is state management. We've just started to see the real problems that techniques like Flux and Redux cause, so the return to component-local state using this.setState and the upsurge of tools like GraphQL, Relay, and Bicycle are changing things for the better. What advice would you give to programmers getting into web development?# I think if you're just starting out, the amount of stuff it seems like you need to learn can be overwhelming. My advice would be to minimise what you take on for your first couple of apps. Just look for solutions to problems that you've experienced, and ignore the people saying you need to learn about this new technology or that new technology. The other piece of advice that I've found useful is to try to deploy to production on day one of any new project. It's much easier to deploy an app that is just a blank "hello world" than a full, complex application with databases and authentication and so on. If you are continuously deploying things to production, you are always ready to start promoting your idea as soon as you're happy for people to start using it. Who should I interview next?# Erik Rasmussen created the Redux Form project. Handling form input well is a deceptively complex problem, and I think Erik has done an awe-inspiring job of understanding those issues and building a sound API for dealing with them. I'd also be interested to hear from Jared Palmer, who's been doing similar work with the Formik project. Conclusion# Thanks for the interview Forbes! Cabbie looks like a fantastic alternative for end-to-end testing. You can learn more about Cabbie at its site. See also Cabbie on GitHub.

react-lite - Implementation of React optimized for small size - Interview with Jade

Even though the React API is small, the implementation is quite sizable due to all the work it does behind the façade. For this reason, people have developed solutions that implement the API with different trade-offs. react-lite by Jade is one of these solutions. To learn about a related solution, read the Inferno interview. Can you tell a bit about yourself?# Jade My Chinese name is GuYingjie (古映杰), and people call me Jade in English. I live in Shanghai and work for Ctrip as a front-end architect. I am the author of react-lite. At Ctrip, we are big fans of React. We use React and React Native in many projects. My primary job is to improve the toolchain and infrastructure around React so that our engineers can develop web apps using React more productively and happily. I like being a part of the open source community. react-lite is one of my open source projects, and there are also other exciting projects in my GitHub, such as factor-network, which implements two machine learning algorithms in less than 400 lines of JavaScript. It works well for playing Flappy Bird and for recognizing the MNIST handwritten digit database. How would you describe react-lite to someone who has never heard of it?# react-lite is a subset of React - just like Zepto is to jQuery. If your React app follows the best practices of React, it's easy to use react-lite to replace React in a comfortable and safe way. Everything should just work while reducing your JS bundle size by 100 kB+. How does react-lite work?# People often ask me: how much code did you have to drop from the React source code to make react-lite so small? In fact, react-lite is not a fork of the React repository. It's a re-implementation of the same React public API using ES2015. It ignores old browsers (such as IE8) to keep itself cleaner and smaller. We don't need to build a complex custom event system as React does; we simply follow the W3C event model, which all modern browsers implement natively. It also makes React.PropTypes a noop (an empty function). It doesn't implement ReactDOM.renderToString or other React features which are not expected to run in production. I cherry-picked about 178 unit tests from the React GitHub repository (all of them exercising the React public API) to make sure react-lite can do the same things. I created an independent repository, react-core-unit-testing, to share the unit test suite. Anyone can use the test suite to implement their own react-lite or to check compatibility with official React. It would be great if React officially shared the public API unit test suite in an independent repository one day. How does react-lite differ from other solutions?# Honestly speaking, react-lite is slower than Inferno and bigger than Preact. But, for now, react-lite may be more compatible. Neither inferno-compat nor preact-compat follows the same React public API unit test suite, and react-lite currently has the best results in the react-core-unit-testing suite mentioned above. As we know, Inferno and Preact are not built for compatibility; they just have a compat version. It may be hard for them if their custom features cannot keep up compatibility with the React API, or if their current implementation can't simulate new features of React. For react-lite, that is not a problem, as it doesn't contain any custom features and can therefore be refactored anytime if needed without breaking.
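Adopting react-lite builds on this drop-in compatibility: the usual approach is to alias React to react-lite in the production build only, along the lines of the react-lite README. A minimal sketch with webpack:

```js
// webpack.config.js - swap React for react-lite in production builds only,
// keeping the real React (and its warnings) during development.
module.exports = {
  // ...
  resolve: {
    alias: {
      react: "react-lite",
      "react-dom": "react-lite",
    },
  },
};
```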
Why did you develop react-lite?# In October 2015, I saw some articles explaining how the virtual DOM works. I thought I could do it better, so I created a repository named esnext-react, tried to implement a simple React using ES2015, and ran the react-motion demo successfully. I felt great when it worked. It's a very smooth animation written using the good old React API that we know, but running on esnext-react. In December 2015, I shared the esnext-react experience with some people in the Shanghai office of Strikingly. The audience, including Dafeng, the CTO of Strikingly, all thought that making a smaller React runtime implementation was a worthwhile thing to do. It can help people who are hesitant to choose React on the mobile web due to the large script size. Then I renamed esnext-react to react-lite and started to improve it and bring it into real projects at Ctrip. Now, react-lite is heavily adopted inside the company. What next?# Now I am focusing on isomorphic web app development. As a result, I have developed the following solutions: relite is a Redux-like library with a more straightforward API for state management. create-app is meant to be configured once; it renders for both client and server based on a router and allows integrating Node.js, React, isomorphic-fetch, js-cookie, querystring, and other isomorphic libraries into react-imvc. react-imvc is similar to next.js, as it helps people build isomorphic/universal web apps more easily. But react-imvc has a different idea, which I call the next generation of front-end MVC architecture. The architecture comprises React/react-lite as the View of MVC, redux-like/relite (state + actions) as the Model of MVC, and an ES2015 class as the isomorphic Controller. All the parts of MVC are isomorphic by design. Our web app can do server-side rendering in Node.js (for SEO and faster initial screen load time) and client-side rendering in the browser (for fast user interaction). Unfortunately, the react-imvc documentation is written only in Chinese. I'm planning to translate it into English in the future. What does the future look like for react-lite and web development in general? Can you see any particular trends?# react-lite does not support React 16 yet because React Fiber is not stable enough. Reducing the script size is also on the React core team's agenda; React 16 is already much smaller than React 15. Maybe it's not necessary to write a smaller runtime library for React anymore, or perhaps it's impossible to implement the react-fiber-architecture with less code than React has. So the future of react-lite is uncertain. It depends on the evolution of React. Anyway, react-lite is still an excellent choice for a mobile site that follows the best practices of React 15 and wants to reduce its JS bundle size. What advice would you give to programmers getting into web development?# Web development moves faster than you and me. No one can learn everything. But luckily, we can learn most libraries or frameworks in a few days. Since there are too many things to learn, we must prioritize our learning. For example, between ES2015/TypeScript and React/Vue/Angular, which should you learn first? In my opinion, the answer is ES2015/TypeScript. The essential programming language features have a higher learning priority than the libraries/frameworks written in the language. I also believe in learning by doing, learning by coding, learning by building, and learning by making. The source code of React is complicated, but the original idea of React is quite simple and elegant.
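To show just how simple that core idea is, here is a toy sketch (purely illustrative - this is neither React's nor react-lite's code): elements are plain objects, and rendering walks them into real DOM nodes.

```js
// Elements are plain objects describing what the UI should look like.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Rendering recursively turns element objects into DOM nodes.
function render(element, container) {
  if (typeof element === "string") {
    container.appendChild(document.createTextNode(element));
    return;
  }
  const node = document.createElement(element.type);
  Object.keys(element.props).forEach((key) => {
    node.setAttribute(key, element.props[key]);
  });
  element.children.forEach((child) => render(child, node));
  container.appendChild(node);
}

render(h("p", { class: "greeting" }, "Hello world"), document.body);
```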
Implementing your own React (or anything else you are learning) in an MVP (Minimum Viable Product) way can help us understand it more deeply and clearly, even if the code we write will never run in production. Who should I interview next?# In China, there are many excellent front-end developers. I recommend some of them below:

- ZhengHaibo, author of regularjs, now works for Netease.
- HeShiJun, an evangelist of JavaScript/ECMAScript and web standards in China.
- yuanyan, author of rax, now works for Alibaba.
- linfeng, author of echarts, now works for Alibaba.
- chencheng, author of dva, now works for Alibaba.
- aui, author of art-template and artDialog.

Any last remarks?# The language gap between Chinese developers and English-speaking developers will become smaller, and I am glad to see that we can learn from each other more in the future. Conclusion# Thanks for the interview Jade! It was great that you dared to develop react-lite as a light replacement for React. We'll see how it goes with React 16. You can learn more about react-lite at GitHub.

React Day Berlin - Fully Packed Day of Your Favorite React Content - Interview with Robert Haritonov

There are a lot of React events out there these days, and it seems a new one appears every week somewhere around the world. To continue on the theme, this time I'm interviewing Robert Haritonov of React Day Berlin, organized in early December. Read the interview about React Alicante to gain more perspective on conferences. Can you tell a bit about yourself?# After going through a few stages of being an active speaker in the Russian-speaking community and an open source maintainer of SourceJS, in recent years I've settled as a Tech Lead in full-stack teams developing React and Node.js based applications. Next to my day job, starting in late 2015, I became actively involved in local meetup organization and in building large international conferences in Europe. Which community events are you organizing now?# Together with colleagues and friends, I'm currently behind the React Amsterdam, AmsterdamJS, and React Open Source meetups, the conferences named after the same groups, plus the most recent addition - the React Day Berlin event. How would you describe React Day Berlin to someone who has never heard of it?# Each event we do has its unique feeling and vibe. Whereas React Amsterdam is a massive event, for React Day Berlin we're building a more personal, cozy atmosphere with the right balance of talks touching various parts of the ecosystem. It's going to be an intense learning day for everybody into React, and a great place to network with other developers from Germany, as well as with international open source enthusiasts and great software engineers. What does React Day Berlin offer?# We offer a fully-packed line-up of talks about React (obviously), React Native, GraphQL/Apollo, case studies from world-known projects, and major open source initiatives like Storybook. Consider this a mini-festival: coming to the center of Europe to celebrate all things React and concentrate the learning experience together with inspiration from your peers. Why did you decide to arrange React Day Berlin?# With side projects like this one - event organization - it's important to challenge yourself with new ideas and formats. We've been looking for an option to fill the season start with a balanced conference format, and Berlin turned out to be a great place to host such an event. It's a vibrant community, with lots of great software developers and a big React fan base, that deserves a tremendous local conference with international vibes. What next?# Meanwhile, we've reached capacity with some events and are focusing on amplifying the best parts of our events and fixing the bottlenecks of previously organized ones. We see lots of opportunities to build great communities, and that's what we love the most. What advice would you give to programmers getting into web development?# Try more things, stay open-minded, and don't follow the hype everywhere it leads you. Choose what works best in your case, and set your way based on what works better for your product and team. JavaScript development is hard nowadays, but at the same time, the community provides a great variety of opportunities, whether you want to build a web app or a desktop application with Electron. Who should I interview next?# It would be great to hear more from popular open source project maintainers. It's sad that open source is so hard these days, with people demanding too much from contributors who do their work in their free time for everyone's benefit.
The more we hear from the developers behind the projects we use, the better the community will be able to understand the effort people put into open source. Conclusion# Thanks for the interview Robert! I will most likely be participating in React Day Berlin myself. It's a good place for a conference like this. You can get a 15% discount on the conference ticket through this link.

unexpected-react - Test Full Virtual DOM - Interview with Dave Brotherstone

Testing React components is a constant topic. You can test through solutions like Jest or Enzyme. Or you could try something else, like unexpected-react. The solution by Dave Brotherstone builds on top of another testing library, Unexpected. Read the interview with Sune Simonsen to understand the ideas behind Unexpected better. Can you tell a bit about yourself?# How would you describe unexpected-react to someone who has never heard of it?# If you want to write tests for your React components, you can use unexpected-react to validate that the components render what they should and respond to events in the right way. It's based on JSX, so you assert that a component renders to a certain JSX template, and any differences are highlighted in a JSX diff. You can render using the shallow renderer, render to the DOM, or render using the test renderer - the assertions stay the same. A simple example:

```js
expect(
  <MyButtonComponent />,
  'to render as',
  <button>Click me</button>
);
```

This example uses the React shallow renderer to render MyButtonComponent and compares the output to <button>Click me</button>. If the output is different, you'll see something like the following:

```
<button className="btn">
-  Click me!
+  Click me
</button>
```

By default, it ignores extra props and extra child elements, so your test still passes as your component functionality expands (unless, of course, you break something!). A more complex example:

```js
expect(
  <MyApp />,
  'when deeply rendered',
  'with event', 'change', { target: { value: 'foo' } }, 'on', <input />,
  'with event', 'click', 'on', <button>Submit</button>,
  'to contain', <LoadingSpinner />
);
```

This test renders the component to the DOM, triggers a change event with an argument on the input component, then clicks the <button> with the text Submit, and finally checks that the resulting render contains a component called LoadingSpinner. That last assertion highlights one of my favorite features of unexpected-react: when you use the DOM renderer, you can assert on the full virtual DOM (the same tree you see in the React Developer Tools), with all the HTML elements and all your custom components. How does unexpected-react work?# It's a plugin for the unexpected assertion library, which is known for its great output and diffs. Most of the real work happens in a library called unexpected-htmllike, which can perform diffing on any HTML-like structure. You give it the actual value, the expected value, and two adapters - simple objects that can read the name, attributes, and children of the actual and expected values respectively - and it returns whether there were any differences along with a diff of the tree in object form. This diff can then be passed back to another method in unexpected-htmllike, which can output the diff in syntax-highlighted JSX form. The diffing algorithm is, in fact, a bit more complicated than the React algorithm, as it optimizes for the best output. For example, it uses heuristics to work out if an element is just a wrapper element and can be ignored. This property can be beneficial if you're testing components wrapped in (possibly multiple) layers of higher-order components - unexpected-react will just see the higher-order components as wrappers and gray them out in the output. unexpected-react itself is mostly just a set of assertions based on calling the diffing algorithm in various ways and presenting the output to the user.
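To make the adapter idea concrete, here is a rough sketch of what an adapter for React elements could look like. The shape is illustrative only; the actual unexpected-htmllike interface may differ:

```js
// Illustrative adapter: teaches a generic HTML-like differ how to read
// one particular tree format - here, plain React elements.
const reactElementAdapter = {
  getName(element) {
    // Host components have a string type ('button'); composites a function.
    return typeof element.type === "string"
      ? element.type
      : element.type.name;
  },
  getAttributes(element) {
    // Everything in props except children counts as attributes.
    const { children, ...attributes } = element.props;
    return attributes;
  },
  getChildren(element) {
    // Normalize children into an array (possibly empty).
    const { children } = element.props;
    return children == null ? [] : [].concat(children);
  },
};
```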
This separation has the significant advantage that it can be adapted to new targets with minimal effort - I've recently released unexpected-preact, for example, which has the same set of assertions for Preact. How does unexpected-react differ from the other solutions?# The main advantages are the JSX-based syntax, so there's no big API to learn, and excellent output if something doesn't match. For instance, if the to contain assertion doesn't find a match, it will show you the closest match, so you can probably go straight to solving the issue (maybe just a single class was missing). I think this is a vast improvement over Enzyme, where you'd typically end up with an expected false to equal true output if the output wasn't found. When running Jest, it also supports snapshot tests, but unlike Jest's native snapshot tests, the diffs are based on real objects, not just string representations of the JSX. This means that if, for example, a class is missing, the missing class will be named, rather than just highlighting the diffed line. If the classes appear in a different order, the test will still pass under unexpected-react, as it understands classes, but fail under Jest. You can also snapshot out of the box using any of the renderers without any special add-ons. Why did you develop unexpected-react?# Back in 2015 the shallow renderer came out, and I was using it to write some tests, but asserting on the output was hard. You'd have to navigate your way through the children and end up with assertions like expect(component.props.children[0].props.children[1].props.className).toEqual('foo'). I'd seen a lightning talk from Peter Müller at JSUnconf in Hamburg on unexpected and had started to play around with it. I was impressed with the output and began to use Peter's plugin unexpected-dom to assert properties on the DOM. One weekend I thought I might be able to adapt unexpected-dom to diff JSX trees, and so unexpected-react-shallow was born. unexpected-react came a bit later, when I realized how I could access the full virtual DOM by hooking into the devtools hooks, and how to separate the logic of diffing an XML-like tree from the actual objects. What next?# I expect we'll add support for Inferno soon. I'm also working on a bigger task to make unexpected-htmllike a bit smarter, so when it outputs diffs, it can skip sections of your render where there are no changes and only show the relevant differences. There are also some incredible things being worked on in the unexpected project - I don't want to say too much because they're very experimental at this stage, but I'm excited about the possibilities, especially when combined with unexpected-react. What does the future look like for unexpected-react and web development in general? Can you see any particular trends?# I think there's a bright future, as it's the kind of project that once you've used it and got the output in your workflow, you can't ever go back to having to debug a test or open a browser to see where the problem lies. There's a great trend, which I think Angular started, of treating the view layer as testable - writing unit tests for views is both achievable and useful. I believe that we'll see view-level tests becoming more commonplace, as they have for the other parts of applications. It wouldn't surprise me if there were some advances in browsers to support some fundamentals of the React model of just rendering from a given state, letting the platform perform the necessary mutations.
For me, this is the game changer with React - it speeds up development, reduces bugs, and makes testing easy. What advice would you give to programmers getting into web development?# I'd say to learn JavaScript as a language - for me, it was a couple of good books and a whole lot of experimentation - and then go to meetups if they're available in your area. Don't make the mistake of thinking "I need to know more before I can go". I started going to an Angular meetup before I knew pretty much anything about modern web development, and I always managed to learn something or meet people who could answer my questions. Who should I interview next?# Lauren McCarthy from p5.js - I've not done much with the project, but she's managed to create a great inclusive community, and I'd love to know more. Any last remarks?# If you're using Enzyme for testing your React components, you should take a quick look at my Medium article comparing the tests and output from the two libraries. Conclusion# Thanks for the interview Dave! unexpected-react looks like a step in the right direction, and the API feels intuitive to me. To learn more, study the unexpected-react site and unexpected-react on GitHub.

Experiences on WebExpo 2017

I was invited to WebExpo 2017 to discuss how I bootstrapped my business. Prague is one of my favorite cities, so it was hard to say no. I'm happy I went there, and I picked up a few lessons while at it. The tenth anniversary of the event was full of content (four tracks!), and there were afterparties where you could meet people. Most of the attendees were local, and some sessions were in Czech only. Tour of Prague#

Opening ceremonies of WebExpo

Like in React Next 2017, the organizers had something special in mind for the speakers. We spent time exploring the city and ended up having a nice lunch (self-paid). Prague cuisine is particularly good if you don't have a strict diet. If you do, then you might be in slight trouble, but you won't starve. The center of Prague is compact, although there are several sights, such as the Petrin tower (think mini-Eiffel tower), outside of it. In addition to the center, we saw the main bridge and the castle.

At the subway with friends

During the way back, I made the mistake of stamping my subway ticket at the wrong end (I prefer perpetual tickets). Of course, there was a check after our brief ride, but I got away with a warning. Lesson learned: stamp the right end! I hope they consider making an additional trip to the Karlštejn castle and the nearby mines in the years to come. I went there once, and it's one of the nicest castles you can find in Europe. It's no wonder they film movies there.

Visiting a Mucha museum

I did some exploring of my own during the day I arrived in Prague. Given I have a keen interest in the art of Alphonse Mucha at the moment, I visited two museums featuring his art in the center of Prague. They were more focused on his commercial work, but I gained a few insights. Mucha mastered design and could use the line to his advantage in composition. You can also see his culture through his works. There's something Czech about them. I wonder if his style ever actually goes out of fashion. There's something timeless about it, and I felt a connection to Antoni Gaudí's work. I have yet to see the Slav Epic, Mucha's masterwork, in the National Gallery of the Czech Republic. That trip alone would require several days. If you love museums, you won't run out of things to do in Prague. The Event#

A conference speaker

The event itself took three days. During the first two days, there were up to four tracks to choose from, and occasionally there was a workshop running on the side. The third day was devoted to a single workshop, and I skipped it as it was added to the program after I had made my travel plans. The challenge of a conference like this is how to provide value for every attendee. The lack of a singular focus means you sometimes have to compromise. But it's also good, as you can then get insights on topics you might otherwise miss. Overall, the quality of the presentations felt decent, and I confirmed a few of my hunches as a result. Joe MacLeod's and Mike Amundsen's sessions, and the final one by Anton and Irene, were particularly worth it for me.

Conference visitors

I was expecting to see more people at the conference, as it was marketed as having two thousand attendees. In the end, there was perhaps half of that, and the spaces were half-empty. I don't know if it has been this way during the earlier years, and I'm not complaining, as I prefer smaller crowds (a cultural thing). The conference space was split within a shopping mall. Although unorthodox, it worked quite well.
I enjoyed the cinema in particular, and I was lucky enough to be able to give my presentation there. The main hall had too much echo for my taste, and it made it slightly annoying to follow the presentations. I don't know if that's something the organizers could have fixed, though, and it might be my personal preference to have less echo. Despite these issues, I will likely revisit the event if it fits my schedule. Each day, and the day before the conference, had an afterparty. I am not at my best at those, but it's still fun to meet new people and try to improve this weakness if nothing else. Prague seems to be the ideal place for this sort of thing. Case SurviveJS - Bootstrapping a Personal Lifestyle Business#

Conference bar

My presentation, Case SurviveJS - Bootstrapping a Personal Lifestyle Business, was about how I bootstrapped my business and changed my life as a result. Although it was on the development track, it probably should have been on the business track instead. It was more of an inspirational talk than a straight-to-the-point "this is how to achieve the same technically". The problem is that there's no single right way, and you have to learn your own lessons. I felt I used the available time quite effectively, and there was time for a few questions at the end. I might have focused too much on the business aspects, and I would balance the talk differently if I gave it again. Even in its current form, I think there's still some wisdom in it people might be able to use. I concluded the presentation with a simple quote: "Dare to dream, dare to try, and never give up too easily". Without dreams and a willingness to push, not a lot can be achieved. Sometimes you can nudge your life in the direction you want. Conclusion# It was nice for me to return to Prague. Even though I've seen the main sights of the city, it seems there's more to discover. I even found a secret bar no tourist knows about. It seems like Prague has an underground world I have yet to discover. The event itself was worthwhile, and WebExpo will return in 2018. If you want to visit Prague while enjoying a cross-cutting event like this, WebExpo is a good pick.

Kea - High level abstraction between React and Redux - Interview with Marius Andra

Redux took the React world by storm when it was introduced. The simple idea provided a guideline for the community and "solved" state management for a lot of different kinds of applications. That said, Redux comes with a certain amount of wiring. For this reason, people have begun to explore abstractions on top of it to make it more comfortable to use without sacrificing the core benefits provided by the library. Kea by Marius Andra is one of these solutions. It provides a high-level abstraction between React and Redux. To learn more about Redux, read the interview with Dan Abramov. Can you tell a bit about yourself?# I work on Apprentus, a private lessons marketplace which I co-founded. I sometimes write about life on my blog and about coding on Medium. I started programming in QBASIC at the ripe old age of 8 and have been hooked ever since. From BASIC I moved to C and C++ (for 2D and 3D game development), Perl (cgi-bin web development) and Java (when I had to build a client-server chat applet). In high school, I wrote a lot of PHP; in university, a lot of Java/JSP. Eventually, I moved to Ruby, and it was my language of choice... until ES6 came out. During my PHP years, I wrote vanilla JavaScript (AJAX!). Later I went with Prototype and then jQuery. I completely skipped the Angular train. When Apprentus's jQuery spaghetti code was no longer maintainable, I bet hard on Ember, rewriting most of the frontend in it. Unfortunately, I traded one set of problems for another... and frustrated a lot of mobile users with the 10-second load times. I'll spare you the rant! In November of 2015, after a month-long vacation in New Zealand, I started learning React as part of a freelance gig. That's where the story of Kea begins. How would you describe Kea to someone who has never heard of it?# Kea is an extremely smart mountain parrot from New Zealand. Kea is also an extremely smart abstraction between React, Redux, Redux-Saga and Reselect. You may think of it either as Redux without the boilerplate or as the ease of setState with the connectivity of Redux. In a nutshell, React handles your views, Kea handles your logic. How does Kea work?# Almost everything you do in Kea is done with the kea function. You use it to:

- Create new logic stores (the place where your logic and data live).
- Pull in data or actions from existing logic stores.
- Connect logic stores to your React components.

Let's look at the simplest example: a counter that can be incremented and decremented with the push of a button. It's built in the "inline kea" style, where we create a logic store and immediately attach it to a React component. I'm using ES decorators here for extra smoothness, but you don't necessarily have to use them. I will assume you're familiar with the concepts in Redux. If not, please check out the interview with Dan Abramov for some much-needed context...
although you'll surely understand the code without it:

```js
import React, { Component } from "react";
import PropTypes from "prop-types";
import { kea } from "kea";

@kea({
  actions: () => ({
    increment: (amount) => ({ amount }),
    decrement: (amount) => ({ amount }),
  }),
  reducers: ({ actions }) => ({
    counter: [
      0,
      PropTypes.number,
      {
        [actions.increment]: (state, payload) => state + payload.amount,
        [actions.decrement]: (state, payload) => state - payload.amount,
      },
    ],
  }),
})
export default class Counter extends Component {
  render() {
    const { counter } = this.props;
    const { increment, decrement } = this.actions;

    return (
      <div className="kea-counter">
        <p>Count: {counter}</p>
        <button onClick={() => increment(1)}>Increment</button>
        <button onClick={() => decrement(1)}>Decrement</button>
      </div>
    );
  }
}
```

It's all very Reduxy. You have actions and reducers. Both are pure functions. The code is very readable, and there's a clear separation of concerns. Compare this to a standard Redux-based approach:

constants/counter.js

```js
export const INCREMENT = "INCREMENT";
export const DECREMENT = "DECREMENT";
```

actions/counter.js

```js
import { INCREMENT, DECREMENT } from "../constants/counter";

export function increment(amount = 1) {
  return {
    type: INCREMENT,
    payload: {
      amount: amount,
    },
  };
}

export function decrement(amount = 1) {
  return {
    type: DECREMENT,
    payload: {
      amount: amount,
    },
  };
}
```

reducers/counter.js

```js
import { INCREMENT, DECREMENT } from "../constants/counter";

export default function counter(state = 0, action) {
  switch (action.type) {
    case INCREMENT:
      return state + action.payload.amount;
    case DECREMENT:
      return state - action.payload.amount;
    default:
      return state;
  }
}
```

containers/counter.js

```js
import { connect } from "react-redux";
import { increment, decrement } from "../actions/counter";
import Counter from "../components/counter";

const mapStateToProps = (state) => {
  return {
    counter: state.counter,
  };
};

const mapDispatchToProps = (dispatch) => {
  return {
    increment: (amount) => {
      dispatch(increment(amount));
    },
    decrement: (amount) => {
      dispatch(decrement(amount));
    },
  };
};

export default connect(
  mapStateToProps,
  mapDispatchToProps
)(Counter);
```

components/counter.js

```js
import React, { Component } from "react";

export default class Counter extends Component {
  render() {
    const { counter, increment, decrement } = this.props;

    return (
      <div className="kea-counter">
        <p>Count: {counter}</p>
        <button onClick={() => increment(1)}>Increment</button>
        <button onClick={() => decrement(1)}>Decrement</button>
      </div>
    );
  }
}
```

store.js

```js
// I'll spare you this part...
```

As you can see, the amount of boilerplate you save is HUGE! No more mapStateToProps. No more export const INCREMENT = 'INCREMENT'. You just write code that matters while retaining the clear functional approach that makes Redux so powerful. Granted, an example this simple could easily be written with React's own setState... but what if your specs change and you need access to this data from a different component? Move the state up and pass around a million props? That's not elegant enough for my taste.
With Kea, assuming you also need to display the value of counter in your header, you would do as follows:

logic.js

```js
import PropTypes from "prop-types";
import { kea } from "kea";

// no change to the code below
export default kea({
  actions: () => ({
    increment: (amount) => ({ amount }),
    decrement: (amount) => ({ amount }),
  }),
  reducers: ({ actions }) => ({
    counter: [
      0,
      PropTypes.number,
      {
        [actions.increment]: (state, payload) => state + payload.amount,
        [actions.decrement]: (state, payload) => state - payload.amount,
      },
    ],
  }),
});
```

index.js

```js
import React, { Component } from "react";
import { connect } from "kea";

import counterLogic from "./logic";

// pull in actions and props from logic stores
@connect({
  actions: [counterLogic, ["increment", "decrement"]],
  props: [counterLogic, ["counter"]],
})
export default class Counter extends Component {
  // nothing changes here
  render() {
    const { counter } = this.props;
    const { increment, decrement } = this.actions;

    return (
      <div className="kea-counter">
        <p>Count: {counter}</p>
        <button onClick={() => increment(1)}>Increment</button>
        <button onClick={() => decrement(1)}>Decrement</button>
      </div>
    );
  }
}
```

header.js

```js
import React, { Component } from "react";
import { connect } from "kea";

import counterLogic from "./logic";

@connect({
  props: [counterLogic, ["counter"]],
})
export default class Counter extends Component {
  render() {
    const { counter } = this.props;

    return (
      <header>
        <strong>Kea is awesome!</strong>
        <span>Count: {counter}</span>
      </header>
    );
  }
}
```

This magical @connect(options) helper is actually just a shorthand for @kea({ connect: options }). By replacing @connect with @kea, you can also define new actions and reducers while pulling in existing ones. Selectors and Side Effects in Kea# Kea has two other notable features. First, you may use selectors (through Reselect) to re-calculate values only when the input changes. Second, you may use sagas for side effects. Please read the documentation for redux-saga to learn more. The GitHub API example is a good demonstration of both features:

```js
import PropTypes from "prop-types";
import { kea } from "kea";
import { delay } from "redux-saga";
import { put } from "redux-saga/effects";

const API_URL = "https://api.github.com";

const githubLogic = kea({
  actions: () => ({
    setUsername: (username) => ({ username }),
    setRepositories: (repositories) => ({ repositories }),
  }),
  reducers: ({ actions }) => ({
    username: [
      "keajs",
      PropTypes.string,
      {
        [actions.setUsername]: (_, payload) => payload.username,
      },
    ],
    repositories: [
      [],
      PropTypes.array,
      {
        [actions.setUsername]: () => [],
        [actions.setRepositories]: (_, payload) => payload.repositories,
      },
    ],
  }),
  selectors: ({ selectors }) => ({
    // this will only be updated if "repositories" changes
    sortedRepositories: [
      () => [selectors.repositories],
      (repositories) =>
        repositories.sort(
          (a, b) => b.stargazers_count - a.stargazers_count
        ),
      PropTypes.array,
    ],
  }),
  // every time a "setUsername" action is called,
  // run the "fetchRepositories" worker
  takeLatest: ({ actions, workers }) => ({
    [actions.setUsername]: workers.fetchRepositories,
  }),
  workers: {
    *fetchRepositories(action) {
      const { setRepositories } = this.actions;
      const { username } = action.payload;

      yield delay(100); // debounce for 100ms

      const url = `${API_URL}/users/${username}/repos?per_page=250`;
      const response = yield window.fetch(url);
      const json = yield response.json();
      yield put(setRepositories(json));
    },
  },
});
```

Please dive into the Kea documentation to learn more! How does Kea differ from other solutions?# I wrote an article on Medium describing how Kea differs from Redux, MobX, DVA and other state management solutions. Please check it out for details! :)
Why did you develop Kea?#

In late 2015 I got a freelance gig, where my job was to code part of a fleet tracking solution. I was told to use React and Redux and given free rein over what other libraries I would use, how I would structure the code, and so on. My employer tasked me with finding the best combination of React and Redux. The solution needed to be extremely explicit and maintainable since I would not stay on the project forever. So I started looking, reading, experimenting, rewriting and inventing.

There was scarce documentation on how to structure a React and Redux application. Most guides recommended the actions/counter.js, constants/counter.js, reducers/counter.js, etc. approach. I knew from my Ember days that this is a disaster, and I strongly preferred a feature-based approach (counter/actions.js, counter/constants.js, etc.). I tried and replaced many libraries until I ended up with a combination of redux, reselect, redux-act and redux-saga... The resulting folder structure combined the better ideas from the ducks approach with the scenes folder from the Redux without Profanity book.

I wrote several helper functions to group actions, reducers and selectors into what I called "logic stores" and built glue to connect them to React components. I also wrote helpers that added sagas to the mix. Eventually, I released all of it under the name Kea with no fanfare. The developers who knew about it were hooked and quickly adopted it in their projects, but nobody else knew about it. Since Kea turned out to be so useful for us, I decided to write documentation, develop tests and add the features necessary for a proper release. And here we are!

What next?#

For Kea, the near-term goal is to develop an extension system, which would let one choose between sagas with redux-saga and epics with redux-observable for side effects, or even let you use both at the same time (Issue #40). Of course, such a plugin system would open up other possibilities. The ultimate goal is to stabilize the API for a 1.0 release. For this, we need as many people to test things as possible. So please try it out and send feedback! :)

For myself, once Kea hits a stable 1.0, I plan to shift my open source efforts to Insights, a "Desktop and Self-Hosted SQL-not-required data analytics and visualization tool". I have big plans for it but had to neglect it for a few months in favor of Kea.

What does the future look like for Kea and web development in general? Can you see any particular trends?#

I've been wrong before with my technological predictions (JSP and Makumba are the new Ruby on Rails! Ember 4 ever!), so I'm hesitant to make bold claims. That said, based on my experience moving from Ember to React and Redux, it felt like a whole new world opened before my eyes. Switching from an imperative paradigm to a functional one was counter-intuitive at first, but worth it. Who would have guessed that by limiting the number of operations I'm allowed to perform on my data, my code becomes simpler to read and mostly bug-free!

Functional programming has been around for a very long time, but it was never as mainstream as it is now. When you get into functional programming, you become a better programmer, no matter the language or paradigm. React brought this to the masses. In my mind, this will be its greatest legacy.

The difference between functional and imperative in frontend development is analogous to what Paul Graham said about Lisp: "During the years we worked on Viaweb, I read a lot of job descriptions.
A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple of years of this, I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening - that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried."

Writing frontend code with React and Kea feels like writing Lisp when all of your competitors are stuck with Java. I expect this trend towards functional programming to continue.

What advice would you give to programmers getting into web development?#

I would show them the example of Jennifer Dewalt. One summer morning, four years ago, I stumbled upon a post on HN titled "I’m learning to code by building 180 websites in 180 days. Today is day 115". It's an amazing story. The author started with no skills in web development and went on to build amazing examples, all because she stuck with it and didn't give up.

It's been demonstrated that the main difference between people who make it and people who don't is grit (read the book!). That is the willingness to push through no matter what. Combine grit with deliberate practice and compound effects, and you'll be unstoppable!

So my advice for you is this: learn a bit, but learn often. Make a plan that every day for the next 30 days, you will read 15 min of any programming tutorial, listen to 15 min of the Changelog, or watch 15 min of a good screencast. Once you're done for the day, draw a big red X in your calendar. Continue like this for 30 days and set up a system that makes it inevitable that you succeed. For example, tell a friend that for every day in the next 30 days that you skip your 15 min of coding, you'll pay them 50 €. You won't skip a day, and after 30 days, the habit of daily coding will be so ingrained that it will be hard to break. And you will have grown a lot! That's my advice.

Who should I interview next?#

Everyone I thought about has already been interviewed! :) Perhaps Jason Miller, the guy behind Preact?

Any last remarks?#

Please try out Kea and help make the React+Redux world a better place! Oh, and if you found any of this useful, don't forget to give Kea a star on GitHub! It would mean a lot to me!

Conclusion#

Thanks for the interview Marius! Kea seems to hit a nice balance in API design. You can get the power of Redux without all the wiring. To learn more, head to the Kea site; see also Kea on GitHub. Marius also wrote a comparison between Kea and alternatives.

Experiences on React Next 2017

I had the privilege to participate in React Next 2017 as an invited speaker. Participation gave me a good chance to learn more about Israel and also a good excuse to showcase webpack to a wider audience. I also gave a surprise talk about my site generator before the event and even sponsored it a bit.

Surprise Meetup#

We arranged a small surprise meetup before the event. In addition to my presentation about Antwar, the plan was to have Kyle present Gatsby. Sadly he had problems with his flight, and we ended up having only my presentation and a Q&A with another speaker in the end.

I tried to keep the talk simple, but in the end, it was likely too technical for most people. If I ever get to present it again, I'll reduce the difficulty level a notch and show how to build an entire site using it. The main problem is that I didn't design it with usability in mind. It's more about flexibility. Developing something like create-react-site on top of Antwar would likely alleviate this problem, but I don't want to maintain something like that as it could become popular.

To be honest, I would look into alternatives like Gatsby and Phenomic over Antwar. I designed Antwar around my use cases, and there's no community around it. I prefer the current situation, though, as maintaining a popular tool soon becomes a chore.

Tour of Israel#

Jerusalem

All the speakers were invited to a two-day tour of Israel. We spent time in the old city of Jerusalem first. After that, we headed to a "bedouin camp" where we rode camels and slept through the night. Unless you prefer excitement, don't ride the first camel.

Sunrise at the desert

Especially the sunset and sunrise were magical. I happened to wake up early enough to enjoy a short morning run. That was truly a unique experience and one of the highlights of the entire trip for me. Sometimes you find the best things outside of the official program.

Masada

We spent the second day of the tour exploring the fortress at Masada. Once you get there, you realize why they built one there roughly two thousand years ago. The visit was again one of those memorable experiences that will remain with me.

Masada

After visiting the fortress, we headed to the Dead Sea. It was nice to see, but to be honest, it wasn't my favorite place. That said, now I can say I've been there and done that.

Tel Aviv art museum

I did some touring of my own after the main event. The art museum especially is a treat if you like modern architecture combined with nice exhibitions ranging from modern art to classics.

The Event#

React Next 2017

The event itself was professionally organized. There were roughly 700 people this year, and the atmosphere was amazing. The two tracks were filled with speakers from all around the world covering primarily React-related topics. Overall the quality of the talks was high, although I skipped most of them.

My talk, Webpack - The React Parts, was a bit of a challenge for me. I tried to condense as many little techniques as I could into the thirty minutes I had, and I like to think I succeeded to some extent in this goal. My only gripe is that I ruined the last topic, Universal Apps, slightly due to time pressure. The live coding portion of the presentation went mostly smoothly, although I had to npm install at a certain point. That's scary to do while on stage.

I also managed to sneak in a couple of dry jokes in the presentation. I don't know how I do it. I try my best to be as unfunny as possible, yet this always happens.

The presentation demo could be pushed further by moving the code to use Preact and using a CDN. There's also potential for smarter code splits, as sketched below.
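To illustrate the kind of split I have in mind, here's a minimal sketch using webpack's dynamic import() syntax; the chunk name, module path, and renderPage helper are made up for illustration:

```js
// A minimal sketch of a route-level code split with webpack's import().
// The chunk name and module path are illustrative only.
import(/* webpackChunkName: "profile" */ "./pages/Profile").then(
  ({ default: Profile }) => {
    // renderPage is a hypothetical helper standing in for your render logic.
    renderPage(Profile);
  }
);
```

webpack emits the chunk as a separate file and loads it only when the import() call runs, keeping it out of the initial bundle.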
Conclusion#

I think the trip was a success overall. I was given a chance to see a strange new country and perhaps make some new friends. I have a greater appreciation for a cool climate now at least, and I can see Israel in a more realistic light than before. Perhaps I will travel back there one day!

FrintJS - Build reactive applications with React and RxJS - Interview with Fahad Ibnay Heylaal

React gives a lot of freedom by default. You can choose which libraries to use to complement it. Freedom comes with responsibility, though. Now you are responsible for your decisions. Fahad Ibnay Heylaal and his company have developed FrintJS, a framework that brings certain opinions around React and helps to alleviate some of the problems.

Can you tell a bit about yourself?#

I work at Travix. For the last few years, I have been focusing on JavaScript a lot. And never felt bored for a single day since! I enjoy being involved with Open Source activities. Not just the coding part, but all the other opportunities it brings along with it too. I feel a lot of good things have happened in my life because of the people I got to know through Open Source. And whenever I can, I try to contribute meaningfully back to the community that's giving us all so much on a regular basis.

How would you describe Frint to someone who has never heard of it?#

FrintJS is ultimately a collection of packages that help you build reactive applications in a scalable way. It is modular by nature and helps give your application structure. If you look at our monorepo, everything is broken down into small packages. You use only the packages you need to build your application - either in the browser, on the server, or in the CLI.

How does Frint work?#

FrintJS has this concept of Apps. Everything is contained within an App. And Apps can contain various things in the form of Providers, backed by a dependency injection system. There has to be a single Root App, and then there can be multiple Child Apps registering themselves to the Root one:

FrintJS apps

If they are meant for rendering, they can also pass options for targeting different Regions (areas where Apps are expected to be mounted) during registration. Code splitting is another primary thing we needed to tackle, and you can see here how different Apps, coming from separate bundles, can be loaded targeting different regions:

FrintJS regions

It has a flexible dependency injection system, and rendering is entirely an optional thing. We use React ourselves, which is why we also built frint-react in our monorepo, so that we can connect our Apps with React nicely. If someone wants to use a different library for rendering, they are completely free to do so. We tried hard not to lock ourselves in. In fact, I recently released frint-vue for Vue.js integration with Frint.

If you are working with components, then FrintJS encourages you to keep the logic outside of your components as much as possible and only pass the props to them as a stream, so the components are responsible for rendering and nothing else.

props-stream

You can read a blog series about RxJS and React to learn more.
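To make the props-as-stream idea concrete, here is a rough conceptual sketch in plain React with an RxJS-style observable. This is not frint-react's actual API; observe and props$ are illustrative names for the pattern:

```js
import React, { Component } from "react";

// Conceptual sketch only. `props$` is any RxJS-style observable that
// emits plain objects of props.
function observe(props$) {
  return (WrappedComponent) =>
    class extends Component {
      constructor(props) {
        super(props);
        this.state = {};
      }

      componentDidMount() {
        // Re-render the wrapped component with each emission.
        this.subscription = props$.subscribe((nextProps) =>
          this.setState(nextProps)
        );
      }

      componentWillUnmount() {
        this.subscription.unsubscribe();
      }

      render() {
        return <WrappedComponent {...this.state} />;
      }
    };
}
```

The wrapped component stays a pure function of its props; all the logic lives in whatever produces the stream.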
How does Frint differ from the other solutions?#

It's hard to compare with any other solutions since FrintJS takes a simple and unique approach. It is not a full-featured framework like AngularJS or EmberJS, but rather gives you a set of solid building blocks that you can grow your application on. FrintJS provides tools that help you break your large applications into smaller apps that you can assemble on demand.

You can say that it differs mainly from other frameworks by not locking itself into any specific rendering or templating library. And also by not targeting any specific platform: browser, server or CLI. It just works everywhere.

Why did you develop Frint?#

At Travix, we already have our front-end application built with React. We were one of the early adopters of React, and things have grown big over the years. There are multiple teams continuously working on the same repository, and it has resulted in a pretty large monolith over time.

We realized we had a scaling issue, both in distributing the work to individual teams and performance-wise when it comes to bundling the whole application in our CI server in one go. We did some proofs of concept for solving the various issues we had, and one of them ended up becoming FrintJS. Now that we have this concept of Apps, each team can maintain their app in their repository. And from the server's perspective, we can load only the apps we want (targeting URLs, etc.) and render them in the browser.

There was also a need for control over what dependencies become available to all the teams. We want to ship as little code as possible to the browser, and we wanted to constrain ourselves by limiting ourselves to using FrintJS only. Besides Frint packages, we currently have a hard dependency on lodash, react, and rxjs only.

This control also gives us an advantage with backwards compatibility, and we do take it seriously. Whenever we make changes, we move the removed features to the frint-compat package, and they are supported with deprecation warnings for at least one quarter. Doing this gives our teams enough time to migrate.

A lot of our particular problems have been solved by FrintJS for us, but we always made sure that we addressed them in as generic a way as possible, so that it doesn't just help us in one single project, but in as many other projects as possible, by us and others outside of Travix too.

What next?#

It has been almost a year that we have been running FrintJS in production. The project has evolved a lot based on our learnings from production experience. And it will continue to grow as we face new challenges. Since the release of v1.0 earlier this year, we consider the project to be stable enough, and now we are building new packages around the core of the framework as we need them.

Besides that, we feel that it could do better if others find out about it too and give it a go. That way, it will help us make the project even better with fresh new ideas. We always try to communicate what's coming next on our Roadmap publicly.

What does the future look like for Frint and web development in general? Can you see any particular trends?#

FrintJS is young, and so far it has mostly been guided forward by the production needs of Travix. But I feel as more people find out about it, it will continue to grow even stronger and build a community alongside the current group of contributors too. Then things will happen that may even surprise us, positively :)

As for web development in general, it has never looked more exciting than now. And it just keeps on getting more and more exciting over time. Currently, React and its ecosystem seem to be winning. And React has done an excellent job at advertising functional programming more positively to a wider audience. But it is the ecosystem around it that excites me even more. So many experiments are being done by everyone that the norm gets pushed harder every day, forcing us to think differently. And that's just amazing.

We are big fans of reactive programming at Travix, and I feel RxJS could get a bit more of a boost from influential developers in the community. We bet big on RxJS with FrintJS ourselves, and a lot of hard problems have become easier for us to solve once we started thinking reactively.
And I think the next big shift we will see in web development is a majority of developers adopting RxJS or similar libraries for doing reactive programming.

What advice would you give to programmers getting into web development?#

Web development has evolved a lot over the last 4-5 years. And it really can be overwhelming for anyone going into it for the first time and trying to figure out what is happening at the moment. My first advice would be to stay patient. You don't have to learn everything in one go. There are so many things to learn.

The best way is to find something to build and enjoy doing it. Figure out what you need to learn to build it along the way. It can be a blog, a todo list, or a two-column layout. It can be anything, no matter how big or small, as long as you enjoy building it.

I have seen many suggest to newcomers to always focus on learning the basics first, and only then get into frameworks and advanced libraries to build stuff. While this is pretty good advice, I think it can also put some newcomers off. If you are learning from absolute zero and want to stay motivated, you would want to reward yourself quickly with visible results too. Otherwise, you may just abandon your efforts.

This is where I feel things like Yeoman, create-react-app, and online code editors like CodeSandbox are doing a great job, at least in the JavaScript scene. These tools enable you to get started with advanced stuff without having to spend too much time just setting things up if you are only interested in trying things out. Newcomers would do better if they found out about these tools early on.

One piece of advice that I received myself from others is to find influential developers in the community, follow them on Twitter, read their blogs, and see what they are up to. Doing this has worked wonders for me, at least. It's a great way to feel inspired every day and stay motivated by keeping track of the cool stuff they are talking about.

Who should I interview next?#

I have been using alex by wooorm in FrintJS to help us write better documentation by catching insensitive words early in an automated way. He has been working on some other natural language processing tools with JavaScript too. I would be curious to know more about his latest work.

Any last remarks?#

I must mention and thank all the contributors who helped grow this project, and the teams working at Travix who are not on the GitHub contributors list directly but still kept providing valuable feedback that continuously guided the project in a better direction. Because of those teams, FrintJS as an Open Source project had the good fortune of having production users from day one.

If you have any questions or feedback, feel free to contact me directly; I am happy to hear what you have to say about FrintJS. And many thanks to Juho for organizing this and helping spread the word about FrintJS!

Conclusion#

Thanks for the interview Fahad! I hope people find FrintJS and perhaps even adopt it in their work. You can learn more at the FrintJS site. See also FrintJS on GitHub.

Neutrino - Create modern JavaScript applications with minimal configuration - Interview with Eli Perelman

Setting up a project can require a significant amount of effort if you want to control every single detail. This might be one reason why there are so many boilerplates out there, as people tend to have different tastes. To make things easier, Eli Perelman has developed a solution on top of webpack.

Can you tell a bit about yourself?#

How would you describe Neutrino to someone who has never heard of it?#

Neutrino is a tool that helps you build modern JavaScript applications without having to go through the initial configuration work of setting up webpack. You can install it along with a relevant preset and start writing an app or tool, but you can still customize your build process completely when the need arises.

How does Neutrino work?#

Neutrino utilizes webpack under the hood for building projects by augmenting it with knowledge about build middleware. Neutrino middleware are discrete pieces of webpack configuration that use a custom configuration API. You can compose many of these middleware together into custom presets, and each will modify the build accordingly.

Take Neutrino's React preset as an example. This preset glues together several other pieces of Neutrino middleware that do things like perform Babel compilation, Hot Module Replacement, add loaders for many different file types, development servers, and minification, just to name a few. Each piece of middleware illuminates part of the possibilities inherent in a project based on the React preset. Additionally, anyone can augment the preset with their middleware, presets, and custom configuration to suit their tastes.

Getting started with Neutrino is easy, using either Yarn or npm. As an example, here's a quickstart for a React project (using Yarn for brevity):

```bash
yarn add react react-dom
yarn add --dev neutrino neutrino-preset-react

mkdir src
echo 'import React from "react";' >> src/index.js
echo 'import { render } from "react-dom";' >> src/index.js
echo 'render(<h1>hello world</h1>, document.getElementById("root"));' >> src/index.js

yarn neutrino -- start --use neutrino-preset-react

✔ Development server running on: http://localhost:5000
✔ Build completed
```

Open a browser to localhost:5000, and you are ready to go!

To make some of this possible, we had to create our own webpack configuration API, called webpack-chain. As you may know, webpack exposes a low-level configuration format, but this format isn't well-suited for merging configuration deterministically across middleware, or even across many projects. With webpack-chain, we expose a chainable or fluent API for aggregating a webpack configuration, which is much more deterministic.

The above can be done by accessing the Neutrino API from a .neutrinorc.js file, which Neutrino can pick up automatically. You can also move the middleware Neutrino uses to this file to shorten your command to neutrino start.

```js
// .neutrinorc.js
module.exports = {
  use: [
    ['neutrino-preset-react', {
      // Override the page title
      html: { title: 'Enterprise 2.0' },

      // Override the Babel configuration for the React preset
      babel: {
        presets: [
          ['babel-preset-env', {
            targets: {
              browsers: [
                'last 1 Chrome versions',
                'last 1 Firefox versions'
              ]
            }
          }]
        ]
      }
    }],

    // Even completely override the webpack configuration
    // using the webpack-chain API
    (neutrino) => {
      neutrino.config
        .entry('vendor')
          .add('react')
          .add('react-dom');
    }
  ]
};
```

At this point, you can start your app using neutrino start or add a script to your package.json to be able to run it more easily from your command line.
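For instance, the scripts section could look something like this; the exact set of scripts you expose is up to you:

```json
{
  "scripts": {
    "start": "neutrino start",
    "build": "neutrino build",
    "test": "neutrino test"
  }
}
```

This works once the middleware lives in .neutrinorc.js, since the commands no longer need the --use flag.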
It's easy to add linting and testing to your project, which can also consume Neutrino middleware and presets:

```bash
yarn add --dev neutrino-preset-airbnb-base neutrino-preset-jest
```

```js
// .neutrinorc.js
module.exports = {
  use: [
    'neutrino-preset-airbnb-base',
    ['neutrino-preset-react', {
      html: { title: 'Enterprise 2.0' }
    }],
    'neutrino-preset-jest'
  ]
};
```

When running neutrino start, you'll see linting errors, with no upfront configuration. If you create some tests, you can run those with neutrino test, and they will be run using the same configuration, using the test middleware you have chosen. With neutrino build you can output compiled static assets for production deployment. For advanced cases, Neutrino even supports running custom commands that can consume the same configuration that webpack does. It has proven to be a pretty robust solution for us.

How does Neutrino differ from other solutions?#

When I compare Neutrino to other tools, I usually break them down into either boilerplates or CLI tools, and I would like to contrast them separately.

First, while boilerplates are great for getting a project up and running quickly, over time you may run into difficulties with the build system being tightly coupled to the application repository. As you make commits to your project, it is often hard or impossible to apply updates from the upstream boilerplate, which could include crucial bug fixes. Every new project you start means you need to copy the build configuration from existing projects, and eventually this becomes tough to maintain. We experienced this pain first-hand with several Mozilla projects.

Second, there are some fantastic projects out there like Create React App, preact-cli, nwb, and many more that avoid the boilerplate problem, but at the expense of some other tradeoffs. Your configuration could be black-boxed and impossible to modify. They could force you to eject your configuration, making maintenance of the entire build dependency tree and configuration your responsibility again, and also precluding future configuration updates. Each separate type of project needs its own build configuration, and therefore you may need to install a different CLI tool for several different types of frameworks. The configuration you make for one type of project may not be usable in a different kind of project, leaving you to maintain these separately. Not to mention, creating a common and shareable configuration for all of these projects is not possible.

At Mozilla, we started down this route too and ran into these same problems. In the original implementation, I had created something very similar to Create React App, but found configuration very messy, and by putting dependencies like React behind our dependency, users couldn't update React without also updating our CLI tool.

Neutrino attempts to address all of these problems with minimal tradeoffs. Couple this with the fact that Neutrino is project and target agnostic, i.e. you can build web apps, libraries, and even Node.js projects with it, and you can hopefully see its potential to solve many of the headaches some of us have been fighting against.

Why did you develop Neutrino?#

As I alluded to, when creating some of our front-end projects at Mozilla, we ran into some problems. We wanted to use a modern JavaScript feature set and build toolchain across many projects, but webpack didn't easily allow us to share this across those projects while still allowing individual configuration along the way. At least this was not achievable in a very deterministic way.
Our first attempt at solving this ended up with a project that worked well enough for starting new React projects, but fell flat when integrating into existing projects, or if you wanted to keep its underlying dependencies up to date. We had tightly coupled the application dependencies with those of our build chain, which seemed OK at first but led to upgrading issues later. Neutrino was born out of this mistake and manifests one of our core tenets: don't mix application dependencies with those of the build system.

What next?#

webpack development moves quickly, and we want to keep up with that in Neutrino as well. We are moving to release v7 soon, which will support webpack v3 and its improvements. We continue to refine our configuration API to make one-off changes and middleware easier for anyone to create and publish. I also want to work more with some of the framework authors and contributors out there to see if Neutrino can be a good fit for their users, and reduce the proliferation of boilerplates and CLI tools into something more universal and reusable for everyone.

What does the future look like for Neutrino and web development in general? Can you see any particular trends?#

I think web development is in a fascinating place right now. The Web is trying to compete with native applications, and this is a battle I believe the Web will eventually win. The rise of new JavaScript libraries and frameworks is pushing us into another period of discovery of what is and isn't possible for the Web right now. I believe the work being done on Progressive Web Apps is shedding light here, and clarifying that the platform is incomplete but getting better. I also see WebAssembly as the long-term future of web development.

I can't predict the future of development, but I do believe that webpack and Neutrino are on the right course for a while to come. As long as developers want to use cutting edge features, integrated development workflows, and need a fully-featured build toolchain for the Web, I think webpack and Neutrino are well-suited to tackle these obstacles.

What advice would you give to programmers getting into web development?#

My spark in web development came when I found it fascinating that I could write some text and control my computer with it. If you have a passion for these technologies as well, build something. Anything. Tinker with the Web, with JavaScript, and see what you can create. Don't let the complexity get to you. Don't let the vast breadth of content get to you. These things will come with time. What's important is to get your feet wet and learn. I try to learn something new every day, and as long as you strive to continue learning, you can only grow from here.

Who should I interview next?#

I think Guillermo Rauch would be a great person to interview. His work with Zeit, Next.js, now, and past projects is epic.

Any last remarks?#

Thank you for taking the time to read through my opinions and comments. I appreciate everyone in our community and hope I can push the Web forward, while it pushes me too.

Conclusion#

Thanks for the interview Eli! I think a lot of people share the same pain of configuration as you and Mozilla, and Neutrino definitely seems like one way to solve it. To learn more, check out the Neutrino site and the Neutrino GitHub page.

Idyll - Narratives for the web - Interview with Matthew Conlen

Since the early days of the web, people have wanted to visualize data to share with others. Even though the platform provides something basic for these purposes (i.e., tables, images), typically some amount of programming has been required. Idyll by Matthew Conlen is a tool designed to make data visualization in documents easier.

Can you tell a bit about yourself?#

I'm a graduate student working with Jeffrey Heer at the Interactive Data Lab at the University of Washington. Prior to grad school I worked on data visualization tools and interactive stories at FiveThirtyEight, helped the Freeman Lab build open-source tools for computational neuroscience, developed digital tools for journalists at The Huffington Post, and was the senior developer at Rhizome.

How would you describe Idyll to someone who has never heard of it?#

Idyll is a markup language for creating and publishing interactive narratives — think posts on websites like Distill, The Upshot, FiveThirtyEight, and The Pudding. With Idyll you write markup in a text file, which is then compiled into an interactive webpage. Idyll takes a lot of inspiration (and borrows a lot of syntax) from Markdown, while trying to extend these ideas beyond static text.

One of the main points of Idyll is to make it very easy to embed JavaScript components inline with your text, and even have these components responsively update based on a reader's actions or scrolls. For example, a short Idyll file might look like this:

```
# This is my title
### This is my subheading

This is the main body of my article.

Here is a scatter plot, rendered with JavaScript:

[data name:"exampleData" source:"example-data.csv" /]
[Chart type:"scatter" data:exampleData /]

And here is some more text after the chart.
```

In this example:

- the Markdown will be converted to HTML
- a dataset will be read from the example-data.csv file using the [data /] tag
- the [Chart /] component will call out to JavaScript to render the contents of the CSV in a scatter plot

Idyll ships with a set of standard components that can be invoked in the markup. Because Idyll is built on top of React, any React component can be installed from npm and used without any additional configuration. It is also straightforward for users to write their own custom components, as sketched below.

Everything in Idyll is reactive, so when anything changes the document automatically updates. For example, if we wanted to have a chart toggle between a scatter and a line plot, we could add a variable that changes when a button is pressed:

```
[var name:"showScatter" value:true /]

[Chart type:`showScatter ? "scatter" : "line"` data:exampleData /]

[Button onClick:`showScatter = !showScatter`]
  Toggle Scatter
[/Button]
```
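Since custom components are just React components, writing one is a small exercise. Here is a hypothetical sketch; the component name and the count prop are made up, and the markup binding in the comment follows the tag pattern shown above:

```js
import React from "react";

// A hypothetical custom component. In the markup it could be used as
// [Stars count:repoCount /], with `count` kept in sync by Idyll.
export default class Stars extends React.Component {
  render() {
    return <span>{"★".repeat(this.props.count || 0)}</span>;
  }
}
```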
How does Idyll work?#

As is typical with any programming language, Idyll starts with a compiler that does lexing and parsing of the input file. We rely heavily on existing open-source tools to help with this task, namely lex for the lexing and nearley to do the parsing. The compiler then spits out an abstract syntax tree (AST) that represents the hierarchy of elements that belong in the document.

Once the AST is created, Idyll processes it to see which components are used and uses Browserify to create a JavaScript bundle that can be executed in a web browser. This JavaScript bundle includes a React component that will dynamically map the nodes in the AST to React components and render those components as its children. Part of this mapping process involves generating and executing some JavaScript code to make sure that Idyll's reactive variables work and the document is always re-rendered properly as those variables change. One cool thing is that because Idyll's compiler is written in JavaScript, we can execute this whole build process in the browser.

How does Idyll differ from other solutions?#

The typical process for creating interactive documents or explorable explanations involves hand-writing a lot of custom JavaScript and HTML. It can quickly become tedious balancing the narrative portion of the project with the nitty-gritty details of the code. To this end, the New York Times developed ArchieML, a markup language designed to make it easy to pull text into JavaScript code. A core idea with ArchieML is that code and text should be separated because they deal with very different concerns. Text needs to be edited for content and clarity, often by someone who doesn't care to look at the code. Developers will need to integrate that text with their code at some point but typically aren't concerned with grammar while they are writing JavaScript.

In some ways, Idyll takes the opposite approach to ArchieML. Instead of making it easy to pull text into code, Idyll makes it easy to include JavaScript components in a text. With this approach, the relationship between code and text becomes much easier to reason about from an editorial perspective, and it becomes feasible to make nuanced changes to where components appear in the text and how they interact with the page. In this way, the process of including an interactive component becomes much closer to, say, using a CMS to embed an image in a post.

Another project that addresses combining code and data with text is Stencila. Stencila borrows ideas from the "code notebook" world and focuses on embedding executable code with text. My understanding is that the project is focused on reproducible research, whereas Idyll is focused on streamlining the connection between prose and JavaScript to build interactive narratives. There are lots of projects that make it easy to publish Markdown documents online (Jekyll, for example), but none of these allow JavaScript to be tightly integrated with the text.

Why did you develop Idyll?#

I developed Idyll as a way to automate away an entire class of hardships that authors face when they want to publish documents with interactivity. The project is a synthesis of a lot of ideas and lessons learned from developing these sorts of projects at FiveThirtyEight and elsewhere. Because Idyll has a fairly specific use-case, it can encapsulate some best practices. For example, it enables server-side rendering and other performance optimizations by default and allows the developer to avoid the headache of setting up a JavaScript build system and HTML templates.

I believe that the web is a great platform for communication and we are only now starting to see some of the potential of moving beyond static text. With Idyll I hope to make it easier for other people to express their ideas in a dynamic and engaging way and publish them online. The project also has implications beyond just blog posts and news articles, such as providing a new entry point into the authorship of interactive textbooks.

What next?#

I'll continue to focus on Idyll's user-friendliness and on expanding the online examples that serve as a reference for new projects.
I'm also interested in improving the authorship experience for interactive and data-heavy websites in general, so you can expect to see continued work in this area specifically.

Regarding new features for Idyll, one big item on the roadmap is enabling custom transformations that operate on the AST. Doing this would allow new possibilities such as writing components that call out to another program at compile time to generate new static output, for example, calling graphviz to produce an image of a graph. We may also add some syntactic sugar to make certain common tasks even easier. In addition to that, we've been working hard to modularize the individual components of Idyll to make it easier for others to work with Idyll in their projects.

What does the future look like for Idyll and web development in general? Can you see any particular trends?#

It is an exciting time for web development. The number of powerful technologies at developers' disposal continues to increase. I would expect that the JavaScript developer's toolkit for building and deploying code will continue improving in sophistication and optimization. I'm optimistic that build tools will become easier to set up and use once more consensus has been established within the community around certain features. Thanks to the persistence of the open-source community, the amount and quality of web tooling continue to rise. I'm excited to see further development and the advancing maturity of these tools that empower creation.

What advice would you give to programmers getting into web development?#

Focus on solving a problem that is interesting to you instead of wading through tutorials. Learn to use Google to solve problems as they come up. Use things like budo to get up and running quickly, and don't bother listening to people who argue about tools online. Don't use a framework until you can articulate why you need a framework.

Who should I interview next?#

There's lots of interesting work being done in the WebGL / 3D graphics space. Mikola Lysenko is doing great work with regl, and Ricky Reusser has been using it to make some excellent data visualizations. Stardust.js is also an exciting project for using WebGL to visualize data. The work on decentralization from folks working on projects like Beaker Browser and Dat is also exciting.

Any last remarks?#

The Idyll folks are usually hanging out in a chatroom on Gitter. It's easy to join, and we're always happy when people say "Hi" or ask questions about the project. I'd also like to call out Ben Clinkinbeard for all the hard work he has done on the development of Idyll. Thanks for having me! ✨

Conclusion#

Thanks for the interview Matthew! I love how Idyll makes it easier for people from different domains to collaborate. Learn more at the Idyll site. Check out the Idyll GitHub page as well.

Motorcycle.js - A statically-typed, functional and reactive framework for modern browsers - Interview with Tylor Steinberger

Functional reactive programming allows us to think carefully about state and side effects. The question is, how to do that in JavaScript? Motorcycle.js by Tylor Steinberger is one solution to this problem.

Can you tell a bit about yourself?#

Besides my work on Motorcycle, I'm also a core Most.js contributor. My professional and open-source work have both been primarily focused on functional and reactive programming in TypeScript. Away from a keyboard, music festivals are my home away from home. I love to create, play, and experience music as much as I can. Traveling is a newly discovered interest of mine, and I'm trying to increase my ability to travel as much as possible in the future.

To learn more about Most.js, read the Most.js interview.

How would you describe Motorcycle.js to someone who has never heard of it?#

Motorcycle is a type-safe functional and reactive framework for modern browsers. In large part, it is an architectural pattern for building highly interactive applications with Most.js. Given that the base is built with Most.js, Motorcycle is fully reactive. Being reactive solves many challenges such as handling events, errors, and any other asynchronous calls you may need to make.

Motorcycle is functional. Large applications can be written using only functions, which makes them extremely testable. You'll never need to use the this keyword or make imperative function calls. Paired with a library like Ramda, you may never have to see foo.bar or foo['bar'] notations again! Given that it's both functional and reactive, a Motorcycle application is essentially programmed as a function over time.

How does Motorcycle.js work?#

In order to better understand how Motorcycle works, it's important to understand what it achieves first. Motorcycle itself is just a single function named run. Using run requires two functions. We call these two functions Main and Effects.

Motorcycle run diagram

As you may be able to tell from the diagram above, the run function effectively operates like this:

```js
const sinks = Main(sources);
const sources = Effects(sinks);
```

The above may seem impossible to do at first glance, but it is 100% possible! Using a Proxy we can solve this problem.

```js
// Create a Proxy which dynamically adds key-value entries
// as they are accessed from Effects
const proxySinks = new Proxy({}, {
  get(target, property) {
    if (!target[property]) {
      target[property] = createNewStream();
    }

    return target[property];
  },
});

// Call Effects with our Proxy
const sources = Effects(proxySinks);

// Call Main to get our *real* Sinks
const sinks = Main(sources);

// Replicate the values from the *real* Sinks to the proxySinks.
// This is what "closes" the loop to make a reactive cycle.
for (const sinkName in sinks) {
  sinks[sinkName].subscribe(proxySinks[sinkName]);
}
```

The above is an abbreviated version of how the real thing works; for those interested in how it works in practice, the source code can be found online.

How does Motorcycle.js differ from other solutions?#

Motorcycle tries to push the boundaries of how applications can be written. Countless hours have been spent to write the best TypeScript typings imaginable. In an editor like Visual Studio Code, you'll get autocompletion for everything, making spelling errors nearly impossible. When you query for a click event, Motorcycle already knows you're going to get a MouseEvent back, not just an Event. When you want to change the color of a font, you'll even get autocompletion for values like 'darkturquoise' and 'lightsteelblue'.
I'm not aware of any other framework that has aimed to achieve this quality of developer experience.

Why did you develop Motorcycle.js?#

Motorcycle started as a sister project to Cycle.js. The original goal was to squeeze as much performance as possible out of the ideas that André Staltz introduced in Cycle.js. At the time Cycle.js still made exclusive use of RxJS 4 and a virtual-dom, so the first attempts to make Motorcycle as fast as possible involved integrating Most.js and snabbdom into an otherwise untouched version of Cycle.

Only in the past year have I reduced my activity on Cycle.js and begun to focus more of my time elsewhere, with Motorcycle taking on a slightly different API and semantics. In particular, Motorcycle now requires a browser that supports the ES2015 Proxy. The changes that have been made since dropping support for IE have opened many doors for us, especially for architecting large applications.

What next?#

Motorcycle has become stable over the past two years. The next venture in Motorcycle will be to upgrade to depend on @most/core, which is a large improvement over the current version of Most.js. Doing this shouldn't take too long after a v1.0.0 release. Spoiler: I plan to rebrand Motorcycle.js to Motorcycle.ts to further emphasize our commitment to having the best TypeScript typings the language allows.

We're also looking to improve documentation, split existing packages up into smaller pieces, and provide a great deal more testing utilities and commonly used functions. Mainly, we want to foster the community around the project and continue to grow.

What does the future look like for Motorcycle.js and web development in general? Can you see any particular trends?#

I think functional and reactive programming is creeping more and more into mainstream web development, and we'll see more and more ideas exploring these styles. Motorcycle will continue to support this trend and bring these programming styles more into the limelight.

What advice would you give to programmers getting into web development?#

Being self-taught, all the advice I can give is to find a project that piques your curiosity, join its community, and work hard to contribute regardless of your skill level. I found Cycle.js in August 2015, joined the community, and just tried making things, almost none of which anyone uses today. The people active in those days would ask questions, do code reviews, and provide all kinds of tips to learn and grow. Ask questions, be curious, and never stop asking yourself and others how you can improve your skills at and away from a keyboard.

Who should I interview next?#

I think it'd be great to interview André Staltz. He's doing some awesome work to free people from large companies exploiting our privacy for profits.

I interviewed André earlier about Cycle.js. Perhaps we can find another topic to cover!

Any last remarks?#

I'd like to thank everyone from the early Cycle.js days: André Staltz, Nick Johnstone, Frederik Krautwald, and Nathan Ridley. I'd also really like to thank the Most.js core team: Brian Cavalier and David Chase. Without these people and others along the way, I'd still be working 50+ hours a week in a coffee shop. I can't thank them enough!

Conclusion#

Thanks for the interview Tylor! To learn more about the approach, study the Motorcycle.js GitHub page.

Next.js - Framework for server-rendered React apps - Interview with Arunoda Susiripala

Building universal web applications that combine server-side rendering with a client-side front-end is popular these days. The approach is not without its problems, though. Now you have the extra challenge of managing code so that it works on both sides. Due to the differences between them, you will run into a series of problems. Next.js was developed to handle these concerns for React. To understand the approach, I'm interviewing Arunoda Susiripala this time.

Can you tell a bit about yourself?#

My work started to turn towards Meteor-related projects, and I founded kadira.io, a performance monitoring solution for Meteor. At Kadira, I started React Storybook with my colleagues, but eventually, we needed to shut down Kadira. In late 2016, I discovered Next.js and started contributing to it. After Kadira's shutdown, I joined ZEIT to maintain Next.js and take it further.

How would you describe Next.js to someone who has never heard of it?#

I think everyone is familiar with the concept of JavaScript fatigue. Creating a web app with JavaScript is often hard with all of the packages and options that we have today. React, webpack, Redux, React Router and many more libraries and tools are often used and require effort to learn. In comparison, writing a simple PHP app can be as easy as just creating some files and deploying them.

With Next.js we enable developers to build JavaScript web apps with a more straightforward workflow, like in the PHP example. Just create some files that export React components and deploy your app. No need to set up webpack or do any special routing or state management. Next.js also does server-side rendering by default, among many other performance optimizations.

How does Next.js work?#

Let me show you with an example. We first create our project and initialize an npm package.json:

```bash
mkdir hello-next
cd hello-next
npm init -y
```

Then we install Next.js and the React dependencies and create a pages directory:

```bash
npm install --save next react react-dom
mkdir pages
```

In the pages directory, we create a file at pages/index.js with the following content:

```js
import Link from "next/link";

export default () => (
  <div>
    <p>Welcome, this is the home page.</p>
    <Link href="/about">
      <a>About Page</a>
    </Link>
  </div>
);
```

We also make a file called pages/about.js containing this code:

```js
export default () => <div>This is the about page.</div>;
```

We add a script for the development server to the package.json:

```json
{
  "scripts": {
    "dev": "next"
  }
}
```

Finally, we run that script to start the development server:

```bash
npm run dev
```

The app will be started at http://localhost:3000. Any changes to pages and content will be updated instantly in the browser by webpack's Hot Module Replacement (HMR).

The above is just the beginning. You can do a lot with Next.js. You can even customize the base webpack and Babel configuration too. I suggest visiting the Next.js repo for more info.
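As a rough sketch of what that customization looks like, a next.config.js file in the project root can export a webpack function. The exact options available depend on your Next.js version, so treat this as an outline:

```js
// next.config.js - a minimal sketch of customizing the generated
// webpack configuration; the hook receives the config and build context.
module.exports = {
  webpack(config, { dev }) {
    // Mutate or extend the configuration here, for example by adding
    // plugins or aliases, then return it.
    return config;
  },
};
```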
How does Next.js differ from other solutions?#

Here I'll focus on comparing Next.js with two other solutions for building React apps.

1. Custom webpack and Babel setup

Here you need to maintain your configurations and update them for new versions of your dependencies. If you manage multiple apps, upgrading the dependencies and updating all configurations everywhere will be a real problem. If you use Next.js, you don't need to worry about these configurations. It comes along with sane defaults but also allows you to customize as needed.

2. Create React App (CRA)

Create React App is Facebook's official solution for building React apps without build configuration. It works well for what it does. It doesn't, however, deal with routing, so you need to handle this on your own. Furthermore, you can't customize the webpack and Babel configurations as much. Server-side rendering is also complicated to do. For some apps, Create React App is a good solution.

With Next.js, you'll get server-side rendering for free, and there's no need to worry about routing. The built-in routing system is file system-based, and custom routes can be set up for dynamic pages. Since the routing is built into the framework, we can do very cool things like:

- Server-side rendering by default
- Automatic code splitting
- A simple data fetching solution for pages

You can build a decent web app without worrying about configuration, routing and state management.

Why did you develop Next.js?#

I didn't work at ZEIT at the time it was built; it was primarily developed by @nkzawa for ZEIT's web app. Because it was a success, ZEIT released it as an open source project. Since then, features are developed when they are needed to continue building https://zeit.co, and the community helps by fixing bugs and requesting and developing new features.

What next?#

We try to keep Next.js as simple and lean as possible. We avoid implementing too many features. Instead, we aim to build a robust infrastructure and encourage reuse of existing libraries and frameworks on top of Next.js. We just released Next.js 3.0 with dynamic imports and static HTML exporting support. The next topics we will focus on are improving overall stability and reducing the development and production build times of the app.

What does the future look like for Next.js and web development in general? Can you see any particular trends?#

I think we'll see more rich web apps in the future thanks to recent performance improvements in browsers. WebAssembly will have an enormous impact on the industry. Solid tooling will allow development of web apps available for both desktops and servers. Effects like these will lead to web apps completely obsoleting desktop apps. Our goal with Next.js will always be to allow developers to build fast web apps without too much hassle with different APIs and configurations.

What advice would you give to programmers getting into web development?#

First of all, learn the basics well. For example, with front-end web development, learn the ins and outs of HTML, CSS, and JavaScript. Then focus on a couple of frameworks you like and develop a career on top of them. The industry is changing very rapidly, so always look for what's new and stay updated. Don't switch frameworks just because there's something new and cool. Only do that if your current framework doesn't work well or if the new one increases your job opportunities.

Who should I interview next?#

JavaScript has a huge ecosystem. I don't have a specific person to mention. The GitHub trending page may have some interesting people to interview.

Any last remarks?#

As a project maintainer on GitHub, I appreciate it when developers search the web and the existing issues before creating new issues. If it's a new issue, always provide a way to reproduce it (often best as a GitHub repo). That saves us a ton of time so that we can fix legitimate problems and still add new features.

Conclusion#

Thanks for the interview Arunoda! I think it's great to see projects like Next.js pushing the envelope and finding better ways to develop universal web applications. Check out the Next.js GitHub page and Learn Next.js to understand the topic better.

Rekit - Toolkit for building scalable React applications - Interview with Nate Wang

Perhaps the greatest thing about React is how flexible it is. It contains some opinions but not too many. You still have plenty of freedom. Sometimes this is a blessing, but it can also be a curse. Nate Wang realized the same, and as a result, he developed Rekit, a toolkit around React, Redux, and other related technologies.

Can you tell a bit about yourself?#

How would you describe Rekit to someone who has never heard of it?#

You can think of Rekit as an advanced create-react-app: it defines an opinionated approach to creating scalable web apps using React, Redux and React Router. It provides a powerful CLI and web-based interface to make sure the app always follows the Rekit way.

How does Rekit work?#

When starting a project, the first thing is to create a project folder structure and plan how to scale it when adding more features. Rekit can help to create such a project, one which scales well by following a feature-oriented architecture:

Rekit architecture

After creating the project and installing dependencies, you can use the powerful Rekit tools to manage the project and generate boilerplate code. Rekit portal, a web-based tool shipped with Rekit 2.0, works as an IDE for React development. Components, actions, and routes can all be created, moved and deleted via the Rekit portal.

Rekit portal

You can find a live demo of Rekit portal online.

How does Rekit differ from other solutions?#

There have been many boilerplates and scaffolding tools for React apps. Rekit may be the most complete solution, the key differences being:

- It provides a production-ready solution rather than a starter kit.
- It provides powerful tools like the command line interface and the web interface to manage the project.
- It may be the first tool that enables renaming and deletion of Redux actions, which is important for code refactoring.
- It supports the latest versions of dependencies like React 15.6, webpack 3, React Router 4 and so on, and will stay up to date.

Why did you develop Rekit?#

I like to create tools to automate my daily work. Rekit was originally a toolset for helping create boilerplates for another project, the goal being boilerplate reuse in other projects. I realized that it might be useful to share this with others, and so I created Rekit. It has helped my team a lot.

What next?#

I'd like to make Rekit more robust by adding more test cases. The docs can also be improved, and more tutorials can be written. It would be nice if Rekit could become more popular and be acknowledged by more people. I would also like to create more Rekit plugins to add new capabilities to Rekit, such as support for React Native, server-side rendering, etc.

What does the future look like for Rekit and web development in general? Can you see any particular trends?#

I think "JavaScript/frontend fatigue" is still the main pain point for web development. There are too many options and best practices, and it's hard to make decisions about which are better. Rekit is just a toolkit through which we share our best practices for creating web apps using React, Redux and React Router. We hope more people can benefit from it just like we do. To be honest, I can't see any particular trends, but I believe React will be the final winner.

What advice would you give to programmers getting into web development?#

Read specifications first, especially ECMAScript. It won't take long, and it's beneficial for understanding the foundations of web development. If you understand the specifications, you can understand how React, Angular and any other library work.
There is no magic behind them.

Who should I interview next?#

afc163 if possible. He is one of the leading developers of Ant Design, which I think is the best UI library for React.

Any last remarks?#

Thanks to those who contribute to Rekit by reporting issues, asking questions and recommending Rekit to others! You all help to make Rekit better.

Conclusion#

Thanks for the interview Nate! It's nice to see solutions like this appear around React as they address specific pain points the community has. Learn more about the project at the Rekit site.

Redux-First Router - Just dispatch actions - Interview with James Gillmore

Routing is one of those classic topics that comes up again and again. The HTML5 History API itself is quite simple, but there are different opinions on how to apply the idea of routing to web applications and sites. James Gillmore decided to tackle the problem particularly for React and Redux in his Redux-First Router. Can you tell a bit about yourself?#

I started 12 years ago (2005) by hiring other developers to build things. I worked at a music studio in Times Square, NYC and hired developers to build websites for our clients. Eventually, I even got into the MySpace spamming thing. I had been doing a lot of audio engineering, but soon realized that my talents were best more directly applied to technology. My dreams of becoming a famous beat-maker never came true. So I jumped ship, started my boutique web development agency, FaceySpacey Technologies (2007), got myself into trouble completing products on time for clients, and became a coder as a matter of necessity (2010). The rest is history - a very long history of self-mastery (both in programming and the real world).

How would you describe Redux-First Router to someone who has never heard of it?#

Redux-First Router is something that should have existed long ago, but because the React community at the time was caught up in throwing out so much ancient wisdom, it was skipped over. Redux-First Router completes the triumvirate of the MVC (Model-View-Controller), adding the C to the equation (where Redux is the M and React the V). It was as if nobody wanted to hear the letters MVC again. It's anathema to me too, but it is what it is. It also kills the "everything is a component" concept when it comes to routes. Redux-First Router follows the philosophy that "everything is state" and routes are 100% in sync with the actions that trigger that state. The components that make up your view layer just render from the state as they should.

How does Redux-First Router work?#

With Redux-First Router, actions are dispatched as a result of visiting URLs, and conversely, the address bar is updated in response to matching actions. What's significant and special about it is something that was never done before: the actions dispatched have unique types that only the given kind of URL has. This is unique because previous attempts at Redux routing (such as redux-little-router) dispatch actions that all have the same type: the equivalent of LOCATION_CHANGED. Then you have to dig through lots of information nested in that action to figure out what happened. It wasn't conducive to switching over types as is idiomatic in reducers. As obvious as it sounds, having a type that is as informative as the actions you manually dispatch is the key ingredient that finally makes routing seamless for Redux apps.

How does Redux-First Router differ from other solutions?#

No <Route /> components#

Initial attempts at Redux routing followed the React Router approach that "everything is a component". As one of our initial users put it, "cargo culting the same React Router stuff as everyone else just didn't feel right, felt leaky, felt hacky". The fact of the matter is that keeping any state (especially URL state) in the view layer has been an anti-pattern for a long time now, yet somehow React convinced us that it was the exception. It makes sense when you don't have Redux (or another data store). But when you do, a lot more compelling opportunities unveil themselves to you. Redux-First Router avoids using the same misplaced <Route /> components from React Router and similar packages.
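To make the routes-as-actions idea concrete before going further, here is a minimal sketch of a Redux-First Router setup. It is based on the connectRoutes API the project documented around this time; treat the exact names and the history setup as illustrative rather than authoritative:

```js
import { createStore, applyMiddleware, combineReducers, compose } from 'redux';
import createHistory from 'history/createBrowserHistory';
import { connectRoutes } from 'redux-first-router';

// Each action type maps to a URL path - no <Route /> components anywhere.
const routesMap = {
  HOME: '/',
  USER: '/user/:id',
};

const { reducer, middleware, enhancer } = connectRoutes(
  createHistory(),
  routesMap
);

const store = createStore(
  combineReducers({ location: reducer }),
  compose(enhancer, applyMiddleware(middleware))
);

// Dispatching a regular action updates the address bar to /user/123...
store.dispatch({ type: 'USER', payload: { id: 123 } });

// ...and visiting /user/123 dispatches { type: 'USER', payload: { id: 123 } }.
```

Reducers can then switch on HOME and USER like any other action types, which is exactly what the sections below build on.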
Packages like React Router promoted their version of <Route />, rather than take advantage of how Redux removes the state from the view layer. <Route /> is for developers for whom Redux is still out of reach. Contrary to how easy seasoned developers may feel Redux is, it's difficult for novice and even intermediate developers. So I think <Route /> still makes sense for a broad category of users, but for power users, Redux-First Router kills the <Route /> component as well as the "state-within-the-view-layer" anti-pattern.

The Redux philosophy#

Everything you do with Redux-First Router mirrors the terminology and thinking of Redux itself. The primary example is the thunk option attached to routes - it has the same API as used with redux-thunk, i.e. the dispatch and getState arguments. Also, it's about what Redux-First Router stands for. The fact that from the start you are encouraged to get the most out of Redux is a meaningful thing. Philosophy is useful, the same way all the contracts Dan Abramov enforced on us have made our lives easier.

Further unique features#

Here's a quick list of more of the unique and powerful things you can do with Redux-First Router:

- Route-triggered data-fetching
- Prefetching (of both chunks + data!)
- React Native support (Android BackHandler, Deep Linking, Push Notifications)
- Sick stuff to make React Navigation actually work with Redux (this is our best stuff, which you'll be hearing about soon)
- Top-notch server-side rendering idioms
- Everything you'd expect (redirects, 404s, scroll restoration, components, automatic document.title management, page-leave confirmations, dynamic route adding for code-splitting, history entries state, the list goes on)

Most attempts at Redux-specific routing have been pretty bare bones and never got polished enough to become a full-featured solution. While there's always some feature that will need to be added, Redux-First Router is something I've committed to for the community. So no expense has been spared!

Why did you develop Redux-First Router?#

Here's what I haven't told anyone, and SurviveJS is the first to hear it: I am a relative newcomer to React (December 2015). I decided to skip straight to React Native (so I could skip mastering webpack and all the related choices). I had a client project I had to build with a deadline. I jumped right in and built it using Redux. Near the end of the app, I had to add deep linking and push notifications. So I wanted to find a way to make the app URL-driven without changing much code. I began looking into other Redux routing solutions for React Native, and since those weren't doing the trick, I decided to check out what they were doing on the plain web. I then realized there wasn't anything anywhere that allowed you to build your Redux app in a URL-independent way, yet while still having support for URLs. At that point, it occurred to me that the obvious solution was to make your regular Flux Standard Actions somehow be representative of URLs. That's where the whole "action types as paths" concept was born. I built the router and had to change almost no code. Since all my actions were Flux Standard Actions with payload objects, it was only a matter of setting up the routing config and then doing a few changes in reducers. It was not all that complicated to do, so I was surprised that nobody had taken that route yet.

Tell us more about not having to change your code#

Well, the point is that you have fewer actions when using Redux-First Router.
Having fewer actions is a good thing, given how the number of actions can get out of control. Instead of having one setter-style action to show a drawer, and another setter-style action to close it (e.g. 'OPEN_DRAWER' and 'CLOSE_DRAWER'), you simply have 'FEED' and 'NOTIFICATIONS' which you'll need anyway. Then in the reducers, you must add some "tear down" code to open and close the drawer when you visit these routes. For instance, when you visit 'NOTIFICATIONS' the drawerOpen state is true and when you visit 'FEED', drawerOpen is false. Here's an example taken from the Redux-First Router solving the 80% use-case for Middleware article.

Old approach with many setter actions:

```js
const sidebarOpen = (state = false, action = {}) => {
  switch (action.type) {
    case 'SIDEBAR_CLOSED':
      return false;
    case 'SIDEBAR_OPEN':
      return true;
    default:
      return state;
  }
};

export default sidebarOpen;
```

New approach with fewer actions and smarter / fatter reducers:

```js
const sidebarOpen = (state = false, action = {}) => {
  switch (action.type) {
    case 'HOME':
    case 'LIST':
    case 'VIDEO':
    case 'LOGIN':
      return false;
    case 'SETTINGS':
      return true;
    default:
      return state;
  }
};

export default sidebarOpen;
```

So that's all I had to do - change my reducers, remove unnecessary actions being dispatched, make my reducers respond intelligently to a wider variety of actions, and voilà! In record time I now had URLs and could deep link into my app.

What next?#

I've been working on completing my Universal product line. That will have some exciting connections to Redux-First Router. The hint is the word "prefetching". My overall main priority has been building "Next.js for the rest of us". Put another way, the frameworkless approach to the best features from Next.js. Next.js is great, but very few power users want to get locked into their walled garden. So by the time I'm done with my initial vision, the only reasons to use Next.js will be that webpack configuration is either too much for you or just something you would rather not deal with. For truly professional apps, I can't see how seasoned developers would want it any other way. If you're aiming for the top spot in your given market, you want complete customization available to you.

What does the future look like for Redux-First Router and web development in general? Can you see any particular trends?#

Simultaneous server-side rendering (SSR) and code splitting will become significant. It's been a gritty, time-consuming problem, and nobody has wanted to solve it. My view is that the single page application is dead, and if you're not simultaneously SSRing and splitting you're doing it "wrong." Traffic from Google is the biggest driver for many businesses. It is a key component of basically anything online, and to go without SSR is a mistake. Given the tools we are using are so heavy in terms of bytes, it is also a mistake not to split your code. Both need to be done together. By the way, not splitting doesn't just increase bounce rates, it also compounds the Google problem, since Google likes fast sites. Until now, doing both SSR and code splitting has been a hair-pulling experience. Most people just gave up. I won't get into the nitty gritty of what the challenge is today, but you can read my code cracked for SSR + Splitting article and the recent React Universal Component 2.0 launch article to learn why. Oh, and by the way, SSR with Redux-First Router is the most idiomatic Redux has ever been on the server.
And due to the way my Universal product line works regardless of which router you use, simultaneous SSR and code splitting is a dream with Redux-First Router. There is still some stuff left to do, and if you've heard that splitting isn't related to routing, you've been misled. To do it at the highest level, you need to do prefetching. So the connection between the router and splitting is a simple interface to specify which route to prefetch. That's all. Redux-First Router is the first solution that does this. Next.js has <Link prefetch />, but Redux-First Router has something far more powerful: automatic prefetching based on your current state, such as the current page a user is on, as recorded in Redux state. There's a one-time setup, but once it's working, potential next routes will be prefetched. There's one final thing to know about how Redux-First Router prefetching works: not just the "chunk" is prefetched, but the data from your route thunks as well!

What advice would you give to programmers getting into web development?#

Question your intentions before you do anything. You'll waste your time doing the wrong thing with intentions that are likely to shift and evolve. The paradox is that it takes a long time to reach the sort of maturity where your intentions become "better". I started out in the game, not as a developer, but as an entrepreneur wanting to build an empire. Reality has long since kicked my ass as I forced myself to become a coder to dig myself out of a hole. That is to say, a long, VERY LONG, history.

It's Not About Passion, It's About Craftsmanship#

Mastering this craft is time-consuming. And it's not about passion. I like to think of myself as a straight-shooter cowboy type with a clarity of vision. Sure I have a passion for software, but it's more about the natural enthusiasm for creation and conception in general. The truth is I enjoy other things outside of staring at a screen far more. For me, it's about being a craftsman as a matter of maturity and integrity. To pay the bills, create value, and make real the stuff that only exists in my mind, as I'm innately compelled to do. I've built the open source stuff I've built primarily because I refused to go into another project without these boxes checked.

Don't Focus on Open Source#

Also, don't focus on open source. If you do, make sure that your intentions are truly pure and that it makes sense for the juncture where you happen to be. For me, I have things I plan to create, and for the time being, I have the luxury to go the long first-principled route, which happens to align with open source contribution. After all, getting your creations in front of the most people is what it's all about. But before anything else, get out in the real world. It's too easy to waste our lives away in front of a computer. You'll miss everything. So with that thought, who has time for willy-nilly open source projects? Not me. If you do open source, make it count financially in one way or another. There are better places you can "give" and interact outside the digital realm. Programming is a business tool, a means to an end. No shame in that. Be about your business. Don't hide behind technology, whether that's your phone or immersing yourself in work. Learn people, follow what truly excites you.

Empire Building is Fool's Gold#

And if your intention is to build an empire and "change the world!", I sincerely ask you to question what that is really about for you. Most of the things we're building, someone else will build in a matter of time.
The world doesn't need you to change it, and you're going to go through a lot of unnecessary pain trying. That's all I'll say about that for now. Doing what's natural for you is the most important thing. Forcing anything will lead to bad results. But we're only human, and forcing is more often than not a core aspect of our journey - to getting somewhere where we no longer force things.

Force Yourself to Become a Better Programmer#

So my advice to new programmers is: skip college lol, get yourself into a jam (or 2 or 3 or more lol) where you have to complete a product, and force yourself to become a better programmer as the only option you perceive you have. Then after you know a thing or two with conviction, build something only you have the unique insights to build.

Who should I interview next?#

Interview someone who's leading the serverless charge when it comes to React. That's another trend that will explode soon as a few more puzzle pieces come together. Perhaps the Dawson guys. I haven't tried it yet, but I'd love to see serverless made stupid simple for the React crowd, and they seem to be on that path. Here's someone else who you should interview: @nchanged from FuseBox. Perhaps I have an addiction to debunking stale solutions. Even though I get a tremendous amount of value from webpack, I'd love to see it built from the ground up, and everything made a lot simpler while still being flexible. I know I'm not alone in that feeling. FuseBox seems to show promise of being able to do that, but perhaps it's easier said than done. Webpack is also getting easier by the day, so it may be unnecessary.

Any last remarks?#

If you've ever felt Redux deserved a routing solution native to its workflow, give Redux-First Router a try :)

Conclusion#

Thanks for the interview James! Redux-First Router seems like a great addition to the ecosystem! Below you have a chronological history of how James' product lines have been progressing thus far:

Redux-First Router:

- Release Article -- Everything Doesn't Need To Be A Component
- The "Sexy" On CodeSandBox Article
- The 80% Use Case Article: Data-Fetching + Middleware

Debunking Universal:

- "Code Cracked" -- The Article That Started It All
- The "Magic Comments" Article
- An Accidentally Popular Article On Importing Both JS + CSS ("dual imports")
- Announcing: React Universal Component 2.0 & babel-plugin-universal-import

Repositories:

- redux-first-router 🚀
- react-universal-component
- webpack-flush-chunks
- extract-css-chunks-webpack-plugin
- babel-plugin-universal-import

And, if you'd like to give Redux-First Router a try, you can do so right here:

documentation.js - The documentation system for modern JavaScript - Interview with Tom MacWright

When you are using a library seriously, you will spend a lot of time with its documentation. It's one of those things that sets good libraries apart from the rest. Even a fantastic idea can go overlooked if it's too difficult to get into and understand. Writing good documentation isn't easy. Tom MacWright has developed documentation.js to help exactly with this. Can you tell a bit about yourself?#

Until recently, I worked at Mapbox. I wrote lots of libraries that sliced and diced geospatial data, showed it on screens, and helped people design maps. The last big project I worked on there was Mapbox Studio. There are enough hard problems in the world of maps to spend a lifetime trying to solve them, but I decided to try out some new domains. I've been taking a few months off to relax, so recently I've been spending more time training a few bonsai trees and maintaining open source projects.

How would you describe documentation.js to someone who has never heard of it?#

documentation.js is a program that generates documentation from the source code of other programs. The documentation is a combination of things that you write, like paragraphs explaining what a function does, and things that documentation.js itself can infer, like the types of parameters and return values.

How does documentation.js work?#

To infer facts about source code, we stand on the shoulders of a giant, which in this case is named Babel. That project has an excellent JavaScript parser called Babylon, as well as lots of other tools for interacting with parsed JavaScript structures. Using these helpers, we can, for instance, find all functions declared in a file and ask them for parameter types, names, return values, and lots more. Then there's the trickiest step: combining that automatically-derived documentation with explicit documentation, things that people write themselves as source code comments. That's all done by merging tree data-structures and so on, and is one of the parts I most want to refactor! Then, finally, it has a significant output system that can generate JSON, Markdown, and HTML documentation. I want the output to be great as a carrot for people to write documentation, so the project itself is responsible for at least the official theme.

How does documentation.js differ from other solutions?#

Documentation is a crowded space! There are lots of documentation generators out there, which is why I maintain a big list of them on a wiki page called See Also. The biggest player out there is JSDoc, so I'll describe how documentation.js differs from it. First off, documentation.js has a system for automatically figuring out which files to document - doing the same trick as webpack or browserify to figure out what requires what and which functions are exported. I wanted that so that the code itself would be the authority that tells us what is a public interface and what is private. The other big difference is that documentation.js aims to be universal and modern. We want to support new JavaScript features shortly after they're announced, and to use Flow types as information for automatic documentation. I was seeing so much information that previously fit into documentation, like property types, instead fitting into type systems like Flow, and I want to embrace that trend by leveraging that type information for documentation too.

Why did you develop documentation.js?#

I was using JSDoc quite a bit - first for the Turf project, and then for Mapbox GL JS documentation.
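As a quick illustration of the input both tools consume, here is a hedged sketch of a documented function. The comment is what you write yourself; parameter and return types are the kind of thing documentation.js can also infer from the code or from Flow annotations (the function and its tags are illustrative, not taken from either project's docs):

```js
/**
 * Scale a value by a multiplier.
 *
 * @param {number} value - The value to scale.
 * @param {number} [factor=2] - The multiplier to apply.
 * @returns {number} The scaled value.
 * @example
 * scale(10); //=> 20
 */
function scale(value, factor = 2) {
  return value * factor;
}

export default scale;
```

Because scale is exported, a tool that follows the module graph can treat it as public API and include it in the generated documentation without any extra configuration.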
JSDoc is a great tool and still in many ways better and more robust than documentation.js. But it's in a place where lots of people rely on its stability, and the JavaScript ecosystem has changed so much since it was designed. My thought was that building something from scratch could make new styles of documentation possible. Plus, it's a lot of fun. Documentation generation might seem bland at first glance, but it's an adventure through parsing, generation, static analysis, and so much more.

What next?#

My highest priority is to grow the documentation.js community. I have lots of strategies for doing this, but nothing has worked so far. That's the most important thing to me, and I think to most maintainers of large projects as well. The other thing is learning from other projects. I've been amazed by the progress being made by projects like ESDoc and always think there are better ways I could structure documentation.js. Plus, there's an unending list of tasks that just entail keeping up with JavaScript itself: there are still some features, like decorators, that we don't support yet.

What does the future look like for documentation.js and web development in general? Can you see any particular trends?#

It's bright but uncertain: I love where the project is currently, but it's substantial and has many areas that could use owners of their own. In a perfect world we'd have a team of 4 or 5 and, as an example of that ownership, support for TypeScript could be "owned" by someone who needed and used that support on a daily basis. As it stands right now, I'm pumped when another project or company adopts the module, but the ratio of work to contributors continues to increase. Web development is weird and crazy right now. I think the biggest, most exciting development we'll see in the next year is the effect of the bleeding-edge tech that was introduced two years ago becoming standard in all browsers. Native support for ES6 modules, for instance, will change the landscape.

What advice would you give to programmers getting into web development?#

First of all, stay patient. Coding is challenging and frustrating, and the single most reliable sign I've noticed of whether people will succeed is their ability to cope with frustration. You might need to laugh at yourself, or stop and breathe, or take a walk. But find something that works, because the feeling of being fooled by a program that should work will never end. Secondly, you can get far by only working on the surface level, but you really shouldn't. Dig in, learn what's under the hood, read the code, and you'll get better, faster.

Who should I interview next?#

Titus Wormer has been doing for natural language what Babel has done for JavaScript. Lea Verou's mavo project is incredibly fascinating. Mary Rose Cook's projects about gaming, programming languages, and entry points for new people to tech are amazing.

Any last remarks?#

Thanks for having me! Stay chill, and remember: the open source community is just some people, and it could be you too. Only you can fight maintainer burnout by being friendly and contributing code, documentation, or even just love to your favorite projects.

Conclusion#

Thanks for the interview Tom! I hope people find your tool and we get enhanced documentation as a result. See the documentation.js site and documentation.js GitHub page to learn more about the project.

Rill - Universal web application framework - Interview with Dylan Piercey

There's a lot of talk about universal web applications, but developing them tends to be harder than it might sound. You will have to worry about the differences between environments, and you will find problems you might not have anticipated. To understand the topic better, I'm interviewing Dylan Piercey, the author of Rill, a framework designed to address this particular problem. Can you tell a bit about yourself?#

Open source software has been my peaceful haven since I learned git. For me, programming is fun, especially on my terms, and FOSS is exactly that.

How would you describe Rill to someone who has never heard of it?#

Rill is my two-year-old baby. In JavaScript terms that means it's just turned 21. Jokes aside, Rill is a tool that allows you to learn fewer tools. It is Koa designed and optimized from the ground up to work in the browser. So how does this help? Well, first of all, you get one router for both the browser and Node, meaning you can drop react-router and Koa. Secondly, you also get to think of building web applications as if you had a zero-latency Node server in every user's browser. With this, you can quickly create fault-tolerant, progressively-enhanced websites with minimal effort. Finally, it is a flexible abstraction, just like it is on the server-side already in Express and Koa. With Rill I have been able to replace many tools, including Redux. It also supports many different rendering engines, with more on the way. Rill also plays nicely with all of the other libraries, making upgrading a bit easier.

How does Rill work?#

Depends on where you look. Rill on the server-side is more or less a rip-off of Koa with some careful forethought, but in the browser things get interesting. In the browser, Rill works by intercepting all <a> clicks and <form> submissions and pumping them through a browser-side router with the same API as on the server-side. It supports pretty much anything you can think of, including cookies, redirects and sessions, all isomorphically implemented (i.e. on both the server and browser). There are a few huge wins here. For instance, you don't have to use any particular <Link> tags or similar, and you aren't tied to React. The server-side also doesn't need to do anything fancy to handle links and forms. Lastly, you already know how links and forms work, so just use them. If you'd like to take a look at Rill's link/form hijacking logic, it has been separated out into @rill/http, making the main Rill repository completely universal!

How does Rill differ from other solutions?#

It provides a unified router. While developing universal applications, I often found myself writing routes twice. As if that wasn't bad enough, the syntax for the routers was often vastly different - try comparing react-router with Express and you'll see what I mean. Rill aims to simplify that and provides a consistent routing interface between the server and browser. It also works perfectly fine as a standalone router in either one. Take for instance the following example:

```js
import rill from 'rill'
import bodyMiddleware from '@rill/body'
import reactMiddleware from '@rill/react'

// Setup app.
const app = rill()

// Use isomorphic React renderer.
app.use(reactMiddleware())

// Use an isomorphic form-data / body parser.
app.use(bodyMiddleware())

// Register our form page route as normal.
app.get('/my-form-page', ({ req, res }) => {
  res.body = <MyForm/>
})

// Setup our post route.
app.post('/my-form-submit', ({ req, res }) => {
  // Analyze the request body (works in Node and the browser).
  req.body //-> { email: ... }

  // Perform the business logic (typically calling some api).
  // ...

  // Finally, take the user somewhere meaningful.
  res.redirect('/thank-you')
})

// Start app.
app.listen({ port: 3000 })

// Simple full page react component with a form.
function MyForm (props) {
  return (
    <html>
      <head>
        <title>My App</title>
      </head>
      <body>
        <form action="/my-form-submit" method="POST">
          Email: <input name="email" />
          <button type="submit">Subscribe</button>
        </form>
        <script src="/app.js"/>
      </body>
    </html>
  )
}
```

Notice how similar this looks to the server-only code. You get to use middleware and routing in a way you probably already know. However, the above example, when compiled with webpack, Rollup, or Browserify, will also work in the browser! For a more detailed example, check out the TodoMVC implementation with React and Rill.

Why did you develop Rill?#

I've built 20+ websites and applications, all of which needed strong SEO and proper fallbacks for users on legacy browsers. It became a constant struggle to enhance content for modern browsers while maintaining support for older ones. Rather than building a server-side solution and then rebuilding a client-side solution, my goal was to make a framework that allowed me to do both at once. It was originally inspired by koa-client and monorouter, and it turned out to be a robust solution.

What next?#

Well, that's largely up to what I build next and what the community requires. Rill has been pretty stable for the past year. Most of the major work has caused no breaking changes. One of the more recent changes is that Rill is now able to run in a service worker, which I think could be interesting for offloading the router to another thread. Another thing that I have meant to explore is creating a Rill middleware that works similarly to ViralJS, allowing for distributed rendering of applications. Something that's been in the back of my head for a while now is making Rill work on other platforms. The code has been formatted in such a way that the document logic has all been extracted into a single file, but I have limited experience with native applications and need a kick to get me going on this front.

What does the future look like for Rill and web development in general? Can you see any particular trends?#

For Rill the future is hard to see. I've mentioned some obvious features above, but the point of it, as with any router, is to be flexible. Rill in my eyes is a foundation for isomorphic/universal apps, and what I've built with it so far is only the tip of the iceberg. In general, I think that things are going to get simpler, faster and smaller. It never seems that way while I'm riding the wave of JavaScript frameworks, but at the same time things are constantly popping up, like svelte and choo, which are all considerably simpler than their predecessors and also faster and smaller. However, the main reason I think this is the case is that the web will eventually bake in much more of the functionality that is needed for modern applications, such as web components. I think the abstractions will get lighter and lighter until they fade away. At least I hope this trend continues. 😜

What advice would you give to programmers getting into web development?#

Make a GitHub/Twitter account and follow everyone who's doing something cool. You have teachers all around you, and excellent open source software sets a standard you can eventually live up to. Don't sweat the stuff you don't know, but try to be aware of it.
Learn things when you need them, and actively search out new solutions when you find that yours are lacking. Find something fun to build. It's far too easy for your day job to ruin programming for you. Try to find genuinely interesting and fun things and work on them when you have time.

Who should I interview next?#

I'd love to hear more from Patrick Steele-Idem on all of the crazy optimizations available with MarkoJS and where the team thinks it's going. I hope a stable Rill integration is coming soon. 😄 I'm also constantly impressed by the quality of modules pumped out by Yoshua Wuyts and would be interested in his approach to building them.

Any last remarks?#

Rill is a lesser-known tool, and I am always eager to receive community feedback. If anyone has any questions or just wants to chat, you can always find me on Gitter. Thanks SurviveJS for the interview and Rich Harris for the recommendation.

Conclusion#

Thanks for the interview Dylan! The approach Rill uses is refreshing, and I hope people find it. Check out the Rill site and Rill GitHub page to learn more about it.

d-l-l - Easy, automatic, optimized DLL config handler for webpack - Interview with James Wiens

Perhaps one of my favorite webpack performance-related tricks is setting up DLLs so that you avoid work. The problem is that maintaining the setup requires time and effort. What if there was a better way? James Wiens has been exploring a better solution with d-l-l. Can you tell a bit about yourself?#

James Wiens

👋 I'm a flow state enthusiast and crafting code is my life's passion. I'm from Vancouver, Canada, eh.

How would you describe d-l-l to someone who has never heard of it?#

d-l-l makes your webpack build faster in just a few lines, without having to waste time on the slow manual configuration steps required to use the DllPlugin. The DllPlugin lets you pre-build the parts of your code that don't often change (such as library code). This means when you change the parts that do change more often, webpack only needs to build those parts, which makes builds exponentially faster. d-l-l adds some helpful utilities for finding and adding dependencies and files that do not often change.

Webpack dinosaurs

What's a minimal example using d-l-l?#

```js
const dll = require('d-l-l')

module.exports = dll
  .init()
  // Directory to resolve paths from
  .dir(__dirname)
  // Pass in webpack config
  .config(config)
  // Filter to only use non-dev dependencies
  .pkgDeps((deps, dev, all) => deps)
  // Find all src files
  .find('src/**/*.+(js|jsx)')
  // Filter to files last modified at least a day ago
  .lastModifiedFilter({days: 1})
  // Return an array of webpack configurations
  .toConfig()
```

How does d-l-l work?#

What if I told you that you could make a webpack build go from 1 minute to 1 second? d-l-l creates an array of webpack configurations consisting of a DLL-only webpack config followed by the existing config from your webpack.config.js. Cache files are created in a .fliphub folder, which allows some smart-ish checks such as:

- Analysis of your webpack config
- Extraction of essential parts from it, such as the output path
- Usage of the configuration passed via .config()

The cache files also allow d-l-l to add the decorated DLL config if no cache folder or files exist, or if there are no manifest files showing what was built and where. When the cache should be cleared is configurable:

- when cache-busting files are modified
- every X (default 33) builds
- when a day or more has passed since the last build

Advanced Example#

Now that we've covered a bit of background, an advanced use case should be more understandable:

```js
const dll = require('d-l-l')

const configs = dll
  .init()
  // Verbose debugging
  .debug(true)
  // Force building of DLL
  .shouldBeUsed(true)
  // Return original config, makes it easy to swap side-effect free
  .og(true)
  // Same as in the simple example above
  .dir(__dirname)
  .config(config)
  // Provide resolved dependency paths manually
  .deps(['lodash', 'inferno'].map(dep => require.resolve(dep)))
  // Filter dependencies in package.json
  .pkgDeps((deps, devDeps, allDeps) => {
    // Ignore dependencies that have `dev` in them.
    // Development tools are one example.
    return deps.filter(dep => !/dev/.test(dep))
  })
  // Find files matching a glob
  .find('src/**/*.+(js|jsx)')
  .lastModifiedFilter({days: 1})
  // Return an array of webpack configurations
  .toConfig()
```

How does d-l-l differ from other solutions?#

There are no other solutions. The only other option is to do everything d-l-l does yourself manually. Doing this means maintaining the additional DLL configuration and referencing it in your code. The point of d-l-l is to avoid this complexity.

Editor's note: autodll-webpack-plugin is a comparable plugin-based solution.
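For context on what "doing it manually" involves, here is a rough sketch of a hand-rolled DLL setup using webpack's DllPlugin and DllReferencePlugin. The file names and entry contents are illustrative; only the plugin options shown are part of webpack's documented API:

```js
// webpack.dll.config.js - must be built first; emits vendor.dll.js + a manifest.
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: { vendor: ['react', 'react-dom', 'lodash'] },
  output: {
    path: path.join(__dirname, 'dist'),
    filename: '[name].dll.js',
    // The exposed library name has to match the DllPlugin name below.
    library: '[name]',
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]',
      path: path.join(__dirname, 'dist', '[name].manifest.json'),
    }),
  ],
}

// webpack.config.js - the app build references the prebuilt DLL via its manifest.
module.exports = {
  // ...the usual entry/output/loader setup...
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      manifest: require('./dist/vendor.manifest.json'),
    }),
  ],
}
```

Generating and ordering this pair of configurations is exactly the bookkeeping d-l-l automates.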
Why did you develop d-l-l?#

I was developing fliphub and found there was no webpack documentation for the DllPlugin. As I researched and experimented with the plugin, I discovered how powerful it was but how clunky it was to configure. To expand on what I mean by clunky, the DllPlugin requires two separate webpack configurations! The order is important - the DLL config has to be built before the normal config. If the normal config uses the DllReferencePlugin before the DLL config has been built, the build will fail. Adding even more commands to the build process wasn't going to happen, so d-l-l was born.

What next?#

d-l-l will be updated with more features. In an ideal future, the core solution it provides would be integrated into webpack core. The minimum, most effective plan to integrate it into the core would involve the following changes for d-l-l:

- Trim down dependencies
- Improve focus in logic and the code domain
- Extract features enabling ease of use
- Cover edge cases with air-tight tests
- Merge to webpack core

Once that was done, the whole community could benefit from the functionality.

chain-able#

All of the libraries I create use chain-able, which enables me to easily create interfaces that describe their intentions and make simple solutions for complex problems.

webpack-wrap#

I plan to create a wrapper library around webpack (webpack-wrap), allowing easy and smart configuration by following this plan:

- Abstract the d-l-l wrapper
- Simplify splitting with the webpack-split-plugin
- Enable webpack merging using neutrino presets in your webpack config
- Finish happypack2 and chain-able-webpack, which allow for:
  - Automatic wrapping of configs in a similar fashion
  - Automatic traversable path resolving (resolving all relative paths in your config)
  - Integration with webpack-cli
  - Hints for common misconfigurations

What does the future look like for web development in general? Can you see any particular trends?#

Tools and language support can be improved. Developers want to use the coolest, hottest sugar syntax, which sometimes still needs advanced skills. Companies competing in open source for developers will promote their particular flavor of the latest and greatest tech. Artificial intelligence will be easier to use and more widespread in both open source and private code.

What advice would you give to programmers getting into web development?#

I couldn't fit it reasonably in this block, so I made it into a repo: awesome-advice.

- 15-minute rule (proverbial): If you ask for help on a problem before doing at least 15 minutes of work researching, debugging, and defining your problem, you're doing the other person a disservice. If you wait longer than 45 minutes and you are stuck, you are doing yourself a disservice.
- The three essential skills in programming: #1. how to research #2. how to research #3. how to research
- The better the problem is defined, the better the solution will be
- Have variable names describe their intention
- Premature optimization is the root of all evil
- Make it debuggable
- Join the community and contribute

Who should I interview next?#

- Eli Perelman
- nchanged

Conclusion#

Thanks for the interview James! I hope this work eventually finds its way to webpack proper. That would make the approach more approachable to a lot of people. You can find d-l-l on GitHub.
See also the following resources for further information:

- Official webpack DLL example
- Robert Knight's article on the DllPlugin
- InVision on optimizing webpack builds with the DllPlugin
- Caching assets long term with the DllPlugin
- DllPlugin question and answer on Stack Overflow

Material-UI - React Components that Implement Google's Material Design - Interview with Olivier Tassinari

Design is difficult as you have to come up with a set of rules to describe it – a system. You don't always have to devise one yourself, and Material Design by Google is one starting point. To understand the topic better, I'm interviewing Olivier Tassinari, one of the authors of Material-UI. It's a collection of React components which implement the system. Can you tell a bit about yourself?#

I graduated from one of the grandes écoles in France with a Master's Degree in computer science. Sometime later I worked at Doctolib, the leading booking platform and management software provider for doctors in France. Besides coding I love sports: swimming, running and from time to time climbing. I'm training to beat my 10k record next year.

How would you describe Material-UI to someone who has never heard of it?#

Material-UI provides user interface components which can be reused in different contexts. That's our core mission - we are a UI library. The React, Angular, Vue, Ember and Polymer ecosystems all have the concept of components. We have chosen to implement the Material Design Specification in React components. Let's say you want to display a nice button; all you need to do is the following (example for Material-UI v1):

```js
import Button from 'material-ui/Button';

const MyApp = () => <Button>I Will Survive</Button>;

export default MyApp;
```

Editor's note: This would be a good chance to use babel-plugin-transform-imports as it can rewrite import { Button } from 'material-ui'; to the above while still pulling the same amount of code into the project.

How does Material-UI work?#

Most of the heavy lifting in Material-UI is done by React and JSS. While we bet on React early in 2014 and have stuck with it, we are already at our third iteration on our choice of a styling solution. We started with Less, tried inline styles, and are now switching to CSS-in-JS thanks to JSS. One of the first things people ask when they find out about the library is how to customize its style. In the past our answer to that question was not ideal, but it's improving now. Through the evolution of components in different contexts, we have identified and addressed four types of customization (ordered from the most specific to the most generic):

- Specific variation for a one-time situation
- Specific variation of a component made by a user and used in different contexts
- Material Design variations, like with the buttons
- User global theme variation

To learn more about JSS, see the interview of Oleg Slobodskoi, the author of the tool.

How does Material-UI differ from other solutions?#

It helps to understand the tradeoffs we have made. At some point when building a UI library, or even a presentational component, one aspect will need to be prioritized over another. So let's see what we have prioritized and what we have not. I believe that most of the value of using a UI library comes from the API contract it provides. But at the same time, API design is one of the hardest things to do when building a UI library. We want the API to be consistent. We want to reduce the cognitive overhead of learning our API. Doing this is prioritized over an API tuned for specific contexts. We want the API to be low-level. By low-level, we mean close to the DOM elements. It's simpler to work with no abstraction than the wrong abstraction, and low-level APIs more easily allow composition. We encourage users to build on top of it. If they create something that is helpful for more users, it can be integrated into the library. We have prioritized these things over a higher-level API.
However, sometimes we have to trade consistency and level of abstraction for a good enough implementation. We want our components to work in isolation as much as possible. For instance, we consider global styles an anti-pattern, not to mention their implications for code splitting. Also, developers should be able to use only one of our components without paying a large overhead. We want the implementation to be performant. We want our components to be easily customizable from the outside. We want our components to be accessible. Finally, we would rather support fewer use-cases well and allow people to build on top of the library than support more use-cases poorly. You can read further in our vision for the project.

Why did you develop Material-UI?#

The credit for creating Material-UI goes to Hai Nguyen. I have been contributing since six months after the first release. Ironically, my original motivation for choosing Material-UI for a fun side project (to save time by using an existing React implementation of Material Design) is at odds with the effort I put in as a maintainer now. I have spent a lot of time improving the library. But I don't regret it, as I have learned a lot in the process, ranging from how to conduct social interactions in a community to the ins and outs of the web stack, API design, visual testing and more.

What comes next?#

We are going to try to follow this plan:

1. Publish the first beta releases.
2. Fix the last API inconsistencies (in beta we will still make breaking changes).
3. Merge the beta branch into master.
4. Publish the first pre-releases and fix any issues that come up.
5. Publish v1! 🎉

At that point, some features and components from v0.x will be missing in v1. So, what about them? Both versions can be used at the same time, meaning projects can progressively migrate to the new version, one component at a time. Over time and with help from the community, more and more components will be implemented in v1. Finally, in alignment with our vision, we would rather support fewer use-cases, which may mean that some features and components will not be in the v1 core. All of the plans above are in our roadmap.

What does the future look like for Material-UI and web development in general? Can you see any particular trends?#

Material-UI is popular in the React ecosystem, but Google recently changed their strategy with material-components-web. Depending on how well material-components-web solves the problem, Material-UI might use it internally. But at the same time, Material-UI's goal goes further than just providing an elegant implementation of the Material Design guidelines. The Material Design specification sets the bar quite high, and developers should be able to benefit from that while easily customizing it for their needs. This customization work is what I have been collaborating on lately at work. We have been taking advantage of Material-UI's customization power to implement a brand-specific UI far from the Material Design specification. You can think of it as a Bootstrap theme. I believe this can be a useful strategy for other developers too.

What advice would you give to programmers getting into web development?#

Learn to focus on what matters. It's so easy to lose focus and work on the wrong thing. Learn the ins and outs of the abstractions you use. Keep the big picture in mind. Be curious about the details. In a codebase, nothing exists by chance. Learn and understand the "why" behind things. Sleep well. Stay active with something like sports.
It's always how I get my best ideas.

Who should I interview next?#

Arunoda Susiripala, for the awesome work he has been doing with the ZEIT team on Next.js. React was the last JavaScript project that I was as excited about as I am about Next.js. The user experience and developer experience are way beyond anything I have used before.

Any last remarks?#

Special thanks to the core Material-UI team:

- Hai Nguyen
- Matt Brookes
- Nathan Marks
- Kevin Ross

Thank you Oleg Slobodskoi for open sourcing JSS. And thanks for having me on the blog!

Conclusion#

Thanks for the interview Olivier! It's great to see solid UI libraries for React, as that has traditionally been a weak point, but it looks like the situation is improving. See the Material UI site and Material UI GitHub to learn more about the project.

Fall Tour - Vienna Clinics, ReactNext, WebExpo, ReactiveConf

Even though I have traveled a lot this year already, it looks like more travel is in store. The travels so far have been valuable in terms of experience. I've seen more places this year than during any previous year. And I like to think it has made me better rounded as a person. Getting outside of your comfort zone can be a good thing, even if it is a little painful. As a result of my travels, I've gained better insight into what to do next. And perhaps surprisingly, I've gained more good excuses to venture out. Actions tend to generate reactions. Opportunities follow the brave ones, and so on. I have some clear steps ahead of me. There are still blanks, but that's not a bad thing!

Vienna, Austria - 23.7-26.8#

I'll begin this trip from Vienna. To slow down a bit, I'll stay there an entire month. I like the city a lot, and there seem to be business opportunities available if you know where to find them. One advantage is that this also allows me to reach most of central Europe quickly if some business comes up. I'll be running clinics during my time in Vienna. They are two-hour sessions focused on webpack. I have a particular set of slides to guide the discussion. Discussion of React is also possible. The point of a clinic is to provide as much value as I can in two hours, and it can be adapted based on your needs. A clinic session of two hours costs 500 euros (VAT 0) for a company. I offer the same for individual developers for the price of 150 euros (capped at four developers per session). 20% of the income goes to Tobias Koppers, as I want to support his efforts on webpack. You'll also have a chance to ask questions from him at the end of the session; I'll get the answers to you afterwards. I can also organize full-day workshops in Vienna and nearby cities, even further afield. If you are interested, get in touch, and we'll make the schedule work. My schedule for Vienna is relaxed, and it gives me time to do certain technical work that has been pending a while. It's also an excellent opportunity to write and prepare two presentations!

Tel Aviv, Israel - ReactNext - 6.9-12.9#

I was invited to participate in ReactNext this year. I'll be talking about how to use webpack, particularly with React. I think it won't be a slide show, and I hope to pull off something practical! So live coding might be in store with a high probability.

Prague, Czech Republic - WebExpo - 19-25.9#

I will give a presentation discussing how I built my business at WebExpo. It's going to be a mix of business, technical, and personal bits. A good time for a retrospective.

Bratislava, Slovakia - ReactiveConf - 25-27.10#

You'll see me at ReactiveConf this year as well. I don't have a presentation scheduled, but we'll do a webpack workshop on the third day.

Conclusion#

It looks like this will be a travel-filled year for me. It has been one of those personal growth years, and it has shaped the direction of the next years for sure. It is possible this isn't the last year of travel for me. Note that I have plenty of room in my schedule. So if you want to see me at your meetup or conference and are willing to cover my costs while providing me a chance to turn a profit, it might be difficult for me to say no! I'm especially open to opportunities available in Europe, and I may be able to bring some other developers with me.

dont-break - Check if you break dependents - Interview with Gleb Bahmutov

Releasing new versions of npm modules is an npm publish away. But how do you make sure you don't accidentally break a dependent project? Even if you are careful and test your module well, someone may be depending on a behavior you are not testing. dont-break, a tool by Gleb Bahmutov, was designed to address this problem. Can you tell a bit about yourself?#

How would you describe dont-break to someone who has never heard of it?#

Here is the problem with software development today: there are millions of software modules in the world, and every language has its own public registry: Maven for Java, npm for JavaScript, and so on. Each module has multiple versions. Your module probably depends on some modules (upstream dependencies), and other modules may depend on your module (downstream dependencies).

Updating Upstream Dependencies#

If there are new versions of upstream dependencies, you should probably upgrade versions carefully. I wrote next-update, a CLI tool that tries each new version of an upstream dependency, runs your tests and, if the tests pass, upgrades the dependencies to their new versions. There are services that automate this, like GreenKeeper and renovate. The feedback loop is pretty quick: the tool installs a new version of an upstream dependency, runs your tests and displays the result, showing whether the new version is compatible or breaks your module.

Editor's note: Read the interview about renovate to learn more about it.

The Problem of Downstream Dependencies#

However, this does not address issues in downstream dependencies caused by new versions of your module. Maybe you changed the API or released a bug with the new version, and the downstream dependencies cannot upgrade without breaking their tests. The feedback loop is super long - you publish a new version, a while later maintainers of a downstream dependency try to upgrade to the new version, their tests fail, and finally a bug is opened in your project (if they feel generous). dont-break turns the tables - you can test downstream dependencies with new, unpublished versions of your module to see if the new code breaks them or not.

How does dont-break work?#

dont-break is a CLI tool that can find and test downstream dependencies of your npm module. Here's the basic algorithm:

1. For each downstream dependency, the repository is found and cloned.
2. To ensure the dependency's tests run from a fresh clone, the tests are run once using the dependency's version of your module.
3. If they pass, the new, unpublished version of your module is copied into the dependency's node_modules directory, and the tests are run again.
4. If the tests also pass with your unpublished version, this dependency can be considered functional with your new version.

This can be done for as many downstream dependencies as you'd like, and if no relevant issues are found, the new version of your module can be safely published. A good example is snap-shot-core, which is checked against its downstream dependencies snap-shot and schema-shot, among others. The project snap-shot is in turn checked against its downstream dependencies snap-shot-jest-test, snap-shot-ava-test, etc. In diagram form it looks like this:

snap-shot in a diagram form

The above slide is from my presentation that I highly recommend to anyone working in the npm ecosystem: Self Improving Software. It demonstrates dont-break via the dont-crack wrapper (see below).

How does dont-break differ from other solutions?#

As far as I know, there are no similar tools.
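Since there is nothing directly comparable to point at, a rough sketch may help picture the check loop described above. This is hypothetical pseudocode for the idea, not dont-break's actual source; all names and paths are illustrative:

```js
// Illustrative sketch of the downstream check loop - not dont-break's source.
const { execSync } = require('child_process');

function checkDependents(moduleName, moduleDir, dependents) {
  return dependents.map(({ name, repo }) => {
    const dir = `/tmp/${name}`;
    execSync(`git clone ${repo} ${dir}`);
    execSync('npm install', { cwd: dir });

    // 1. The dependent's tests must pass with the currently published version.
    execSync('npm test', { cwd: dir });

    // 2. Copy the new, unpublished version over the installed one and re-run.
    execSync(`rm -rf ${dir}/node_modules/${moduleName}`);
    execSync(`cp -r ${moduleDir} ${dir}/node_modules/${moduleName}`);
    try {
      execSync('npm test', { cwd: dir });
      return { name, ok: true };
    } catch (error) {
      // The unpublished version breaks this dependent - don't publish yet.
      return { name, ok: false };
    }
  });
}
```

If every dependent comes back with ok: true, the new version can be published with far more confidence.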
Few people like publishing new versions of their modules that are compatible with previously published ones as much as I do (just kidding). Hopefully, in the near future, each project will always stay up to date and will be carefully tested against existing "users" (downstream dependencies) before releasing a patch version, for example.

Why did you develop dont-break?#

I love, love, love writing new software (you can find links to my open source projects at https://glebbahmutov.com/), but I am lazy too. In my opinion, the easiest way to produce a lot of useful software is to do two things:

- Use existing modules rather than writing your own
- Keep dependencies up to date to benefit from new features and bug fixes written by others

After publishing some modules, I noticed that I would routinely set up a lot of the same tools for each of them, so I automated my project setup. By using semantic-release, I also achieved an automated publishing process. The problem was that I still had to write tests to make sure minor and patch releases didn't cause issues for existing "users" (downstream dependencies). To avoid extra testing work, I started cloning downstream projects, copying my new code into those folders and running the tests. Bingo! The idea was born: why don't I automate this?

What next?#

I have two things planned for the future (well, one is well under way, the other is still just a vague idea).

dont-crack#

There is a semantic-release plugin I wrote called dont-crack that wraps dont-break to avoid having to run it locally. My Continuous Integration setup looks at the commits since the last release, and if it decides that it should publish a new minor or patch version (meaning there should be no breaking API changes), it runs dont-break to confirm that downstream dependencies do not break. If a downstream dependency does break, that means our change was incompatible, and it should be published as a breaking major change. Doing this lets the downstream maintainers know there is an update, but that it will require some work. Otherwise, we are all good and can safely publish a minor or patch version.

Sending Reports Back to Other Projects#

I am interested in how to send good bug reports back to your module if one of the downstream dependencies suddenly breaks. Imagine a new version of your module is breaking someone else's module. Do I just see a stack trace from their module - a project I do not maintain or know? Or can we somehow show precisely what behaves differently between the previous version of your module and the latest code? Solving the second problem will finally enable large source code monorepos to be split up. Working on an individual component and being able to test it against its dependencies in a nice, fast and useful way would be huge!

What trends do you see in the future? What advice would you give to someone starting with web development?#

I see a huge push towards immutable everything. I see this in data structures (you cannot modify an object because someone else relies on it), deployment (Docker, immutable infrastructure) and the npm registry (you cannot unpublish a version that is already there). The concept of "this artifact is permanent and is not going to disappear" is nice. Thus my advice would be to learn how to update objects without mutating the original ones, find out how to deploy many times per day and learn about deploying a new system instead of tinkering with a running one.
Who should I interview next?# Please, please interview my coworker, Brian Mann, the founder of Cypress.io. Of course, it is a shameless plug, since we work together on the fantastic E2E testing tool (it is going to be open sourced soon, I promise). He has a good sense of what makes web application feature testing hard and why existing solutions like Selenium are not enough. Also, I disagree with Brian a lot, but love hearing his take on things! Conclusion# Thanks for the interview Gleb! I think if the community adopted tools like dont-break, that would be one step forward in solving the npm package quality problem. Perhaps some of these problems could be pushed to the service level to help people improve the quality of their work while enhancing the ecosystem. To learn more about dont-break, check out the GitHub project.

Rollup - Next-generation ES6 module bundler - Interview with Rich Harris

Given JavaScript application source cannot be consumed easily through the browser "as is" just yet, the process of bundling is needed. The point is to convert the source into a form the browser can understand. This is the reason why bundlers, such as Browserify, Rollup, or webpack exist. To dig deeper into the topic, I'm interviewing Rich Harris, the author of Rollup. I interviewed Rich earlier about Svelte, a UI framework of his. Can you tell a bit about yourself?# Rollup, Bublé and Svelte, among others, are all products of that. How would you describe Rollup to someone who has never heard of it?# Rollup is a module bundler. Basically, it concatenates JavaScript files, except you don't have to manually specify the order of them or worry about variable names in one file conflicting with names in another. Under the hood it's a bit more sophisticated than that, but in essence that's all it's doing — concatenating. The reason you'd use it is so that you can write software in a modular way — which is better for your sanity for lots of reasons — using the import and export keywords that were added to the language in ES2015. Since browsers and Node.js don't yet support ES2015 modules (ESM) natively, we have to bundle our modules in order to run them. Rollup can create self-executing <script> files, AMD modules, Node-friendly CommonJS modules, UMD modules (which are a combination of all three), or even ESM bundles that can be used in other projects, which is ideal for libraries. In fact, most major JavaScript libraries that I can think of — React, Vue, Angular, Glimmer, D3, Three.js, PouchDB, Moment, Most.js, Preact, Redux, etc — are built with Rollup. How does Rollup work?# You give it an entry point — let's say index.js. Rollup will read that file and parse it using Acorn — this gives us something called an abstract syntax tree (AST). Once you have the AST you can discover lots of things about the code, such as which import declarations it contains. Let's say index.js has this line at the top: import foo from './foo.js'; That means that Rollup needs to resolve ./foo.js relative to index.js, load it, parse it, analyse it, lather, rinse and repeat until there are no more modules to import (a rough sketch of this discovery loop follows below). Crucially, all these steps are pluggable, so you can augment Rollup with the ability to import from node_modules or compile ES2015 to ES5 in a sourcemap-aware way, for example. How does Rollup differ from other solutions?# Firstly, there's zero overhead. The traditional approach to bundling is to wrap every module in a function, put those functions in an array, and implement a require function that plucks those functions out of the array and executes them on demand. It turns out this is terrible for both bundle size and startup time. Instead, Rollup essentially just concatenates your code — there's no waste, and the resulting bundle minifies better. Some people call this 'scope hoisting'. Secondly, it removes unused code from the modules you import, which is called 'treeshaking' for reasons that no-one is certain of. It's worth noting that webpack implements a form of scope hoisting and treeshaking in the most recent version, so it's catching up to Rollup in terms of bundle size and startup time (though we're still ahead!). Webpack is generally considered the better option if you're building an app rather than a library, since it has a lot of features that Rollup doesn't — code splitting, dynamic imports and so on.
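As a rough illustration of the discovery loop Rich describes - parse with Acorn, collect import declarations, resolve, repeat - here is a minimal sketch. It is not Rollup's actual code, and the resolution logic is simplified to relative specifiers only:

```js
// Minimal sketch of module-graph discovery (not Rollup's real code):
// parse each file with Acorn and follow its import declarations.
import { parse } from "acorn";
import { readFileSync } from "fs";
import { resolve, dirname } from "path";

function collectModules(entry, graph = new Map()) {
  if (graph.has(entry)) return graph;
  const code = readFileSync(entry, "utf8");
  const ast = parse(code, { ecmaVersion: 2015, sourceType: "module" });
  const deps = ast.body
    .filter((node) => node.type === "ImportDeclaration")
    // Simplified: only handles relative specifiers like './foo.js'.
    .map((node) => resolve(dirname(entry), node.source.value));
  graph.set(entry, deps);
  deps.forEach((dep) => collectModules(dep, graph));
  return graph;
}

// collectModules("./src/index.js") yields a Map of file -> imports.
```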
To understand the difference between the tools, read "Webpack and Rollup: the same but different". Why did you develop Rollup?# Necessity. None of the existing tools were good enough. A few years ago, I was working on a project called Ractive, and I was frustrated with our build process. The more we split the codebase up into modules, the larger the build got, because of the overhead I described earlier. We were effectively being penalised for doing the right thing. So I wrote a module bundler called Esperanto and released it as a separate open source project. Lo and behold, our builds shrank. But I wasn't satisfied, because I'd read something Jo Liss had written about how ESM — being designed with static analysis in mind — would allow us to do treeshaking. Esperanto didn't have that ability. Adding treeshaking to Esperanto would have been very difficult, so I burned it all and started over with Rollup. To learn more about ESM, read the interview of Bradley Farias. What next?# I would love to get Rollup to a place where we can call it 'done', so that I don't have to think about it any more. It's not an exciting project to work on, since module bundling is an incredibly boring subject. It's basically just plumbing — essential but unglamorous. There's a fair distance to go before we get there though. And I feel a certain responsibility to keep the community looked after, since I've been such a vocal advocate for ESM. We're getting to an exciting place though — browsers are just starting to add native module support, and now that webpack has scope hoisting, there are very tangible benefits to using ESM everywhere. So we'll hopefully see ESM take over from CommonJS modules very soon. (If you're still writing CommonJS, stop! You're just creating technical debt.) What does the future look like for Rollup and web development in general? Can you see any particular trends?# For one thing, Rollup will become increasingly obsolete. Once browsers support modules natively, there'll be a large class of applications for which bundling (and everything that goes with it — compiling, minifying and so on) will just be an optional performance optimisation, as opposed to a necessity. That's going to be huge, particularly for newcomers to web development. But at the same time we're increasingly using our build processes to add sophisticated capabilities to our applications. I'm a proponent of that — Svelte is a compiler that essentially writes your app for you from a declarative template — and it's only going to get more intense with the advent of WASM and other things. So we have these two seemingly contradictory trends happening simultaneously, and it'll be fascinating to see how they play out. What advice would you give to programmers getting into web development?# Watch other programmers over their shoulders. Read source code. Develop taste by building things, and being proud of them but never satisfied. Learn the fundamentals, because all abstractions are leaky. Learn what 'all abstractions are leaky' means. Turn your computer off and go outside, because most of your best programming will happen away from your keyboard. Most importantly, take programming advice with a pinch of salt. As soon as someone reaches the stage where people start asking them to offer advice, they forget what it was like to be a new developer. No-one knows anything anyway.
Who should I interview next?# I really like following the work of people who straddle the line between JavaScript and disciplines like dataviz, WebGL, cartography and animation — people like Vladimir Agafonkin, Matthew Conlen, Sarah Drasner, Robert Monfera, and Tom MacWright. On the web development front more generally, I've been enjoying playing around with Rill by Dylan Piercey. It's a universal router that lets you write Express-style apps that also work in the browser, and it's really well thought through. For me it hits the sweet spot between boosting productivity and not being overly opinionated. Any last remarks?# Rollup would love your help! It's a fairly important part of the ecosystem nowadays, but I don't have nearly enough time to give it the attention it deserves, and the same is true for all our contributors. If you're interested in helping out with a tool that indirectly benefits millions (perhaps billions!) of web users, get in touch with us. Conclusion# Thanks for the interview Rich! Rollup is an amazing tool and well worth learning especially for library authors. I hope we can skip the entire bundling step one day as that would make things simpler. To learn more about Rollup, check out the online documentation. You can also find the project on GitHub.

JSS - Author CSS Using JavaScript as a Host Language - Interview with Oleg Slobodskoi

If there's one thing that divides web developers, it's styling. A part of this has to do with the different requirements of websites and web applications. What is good in one domain is an anti-pattern in another. To understand the topic better, I am interviewing Oleg Slobodskoi, the author of JSS. Can you tell a bit about yourself?# Working on web UIs for over a decade, I have realized there are two significant challenges in frontend engineering: understanding the state and styling its representation. Unidirectional data flow has made managing state much easier, but styling components is still painful. To improve the situation, I started JSS back in 2014 and haven't stopped learning and developing the project since. Currently, I am working at Chatgrape where we are building a sophisticated client using NLP and deep services integration. All CSS is managed using JSS. Also, I try to talk at conferences from time to time, even if I know I suck at this haha. How would you describe JSS to someone who has never heard of it?# In general, "CSS in JS" libraries are authoring tools which allow you to generate CSS. The paradigm is similar to Sass, Less or Stylus in this regard, the difference being that the host language, JavaScript, is well-standardized. JSS is a set of libraries for writing CSS in JavaScript. They address a wide spectrum of issues. The most significant features are class names scoping, critical CSS extraction, significantly improved maintenance, code reuse and sharing, theming, co-location and state-driven styles. It is important to understand though that not every product has all of the issues that these features address, so not every developer can relate to them or even confirm that they are real. If you don't get it - don't worry, the time for you just hasn't come yet. One general truth you could take away from this is that JSS is a more powerful abstraction over CSS, which is good and bad at the same time. Less powerful abstractions may be of benefit for less experienced developers because less can be done incorrectly, but they certainly have limitations. How does JSS work?# The essential libraries in JSS are core, React-JSS, and Styled-JSS. Low level and library-agnostic, the core is responsible for compilation and rendering of a stylesheet. The core is used by both React-JSS and Styled-JSS internally. React-JSS is a higher-order component providing an interface for React. Styled-JSS is an alternative interface for React which implements the styled primitives factory. A styled primitive or styled component is a component which has initial styles applied when created. There is no need to provide class names when you use it. It has been very actively promoted by the Styled Components library and is worth looking into as an alternative to other interfaces. Our implementation, in fact, combines both styled primitives and a classes map in one solid interface. The general process goes like this:

Declaration: Styles are described by the user in JavaScript. By default we use JSON syntax.
Processing: Styles are processed by JSS plugins. Plugins do vendor prefixing, implement syntactic sugar for user styles and can be made to do any other transformations, similar to PostCSS.
Injection: Once you call the .attach method, styles are compiled to a CSS string and injected into the DOM using a style element.

Examples# Example using the low level core library:

```js
import jss from "jss";
import preset from "jss-preset-default";

// One-time setup.
jss.setup(preset());

const styles = {
  button: {
    color: "red",
  },
};

// Compile and render the styles.
const { classes } = jss.createStyleSheet(styles).attach();

document.body.innerHTML = `
  <button class="${classes.button}">
    My Button
  </button>
`;
```

Example using React-JSS:

```jsx
import injectSheet from "react-jss";

const styles = {
  button: {
    color: "red",
  },
};

const Button = ({ classes }) => (
  <button className={classes.button}>My Button</button>
);

// Function injectSheet generates a HOC, which uses JSS
// and passes `classes` to the `Button`.
const StyledButton = injectSheet(styles)(Button);
```

Example using Styled-JSS:

```jsx
import styled from "styled-jss";

// Produces a button which has the styles already applied.
const MyButton = styled("button")({
  color: "red",
});
```

How does JSS differ from other solutions?# There are too many differences to name them all. To name a few: It is not one monolithic library# JSS is a set of libraries, each designed to solve a specific set of tasks, strongly decoupled from each other. As a result, the user enjoys greater flexibility and cleaner abstractions. For example, the core is not coupled to React, which means it can be used with any framework. Plugin API# The plugin API allows you to manipulate sheets and rules and react to updates. In fact, most features are implemented internally as plugins as well. Focus on performance# Focus on performance has always been of the highest importance. JSS is one of the most performant libraries available. That said, it is hard to compare accurately because some features and implementation details differ a lot between libraries. We benchmark every possible small detail, and we track regressions for each change. Function values# Function values are now widely supported by other CSS in JS libraries. However, JSS differs in that it allows for high-performance JavaScript controlled animations like in the function value example. It is possible because JSS doesn't generate new CSS rules for each animation step. It updates CSS values in place, the same way it would be done using inline styles. I wrote an article to give you more implementation details. Counter-based class name generation by default# The main problem with auto-generated class names is that they need to be deterministic. If you generate HTML and CSS on the server and then want to update both at runtime dynamically, you need to make sure the class names generated at runtime will always match those on the server. To solve this, most libraries use hashes, though they have limitations:

Performance: To create a hash, the CSS rule declaration needs to be stringified and a hashing algorithm run, incurring overhead.
Source order specificity: A number of equal CSS rules will be generated with identical class names, which will override each other. The problem is that application logic might expect the CSS rules in a certain order in the case that one rule is designed to override another rule based on the order of occurrence in the source code. In this case, source order can't be guaranteed and will result in rare but very unpleasant bugs.
High-performance function values: These wouldn't be possible, because after an update of any value, the hash would need to be recreated and the class name on the DOM node updated, leading to an unacceptable degradation in performance.
Payload: Counter-based class names include a simple number which is incremented by each added rule. The number is the most compact, unique identifier available. Hashes are long and bloat the overall CSS size.
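To illustrate the function values feature mentioned above, here is a minimal sketch based on JSS's documented syntax. The link: true option keeps the compiled rules connected to the DOM so values can be updated in place; isActive is a hypothetical data field:

```js
import jss from "jss";

// link: true keeps rendered rules connected to the sheet so that
// sheet.update() can change values in place, without new rules.
const sheet = jss
  .createStyleSheet(
    {
      button: {
        color: (data) => (data.isActive ? "red" : "gray"),
      },
    },
    { link: true }
  )
  .attach();

// Later, e.g. on a state change or every animation frame:
sheet.update({ isActive: true });
```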
No Inline Styles# JSS does not use any inline styles. Inline styles are slow if you overuse them. They are particularly slow in React. Why did you develop JSS?# It is funny because initially I just wanted to use JavaScript as a language to describe styles because I didn't want to learn Sass. Secondly, I didn't want to think about how to name my classes in the global scope, because enforcing BEM is hard. Also, I wanted to eliminate the fear of changing any CSS and breaking unexpected things. Now it has become way more than that, but to put it in one sentence: it is the right abstraction for my tasks, and I enjoy using it. What next?# The foremost focus is on making the DX better: better documentation, auto-completion, syntax highlighting, React Native integration, a better CLI tool. The team has done a lot in the past, but a significant amount of work is still ahead of us, and we need highly skilled, motivated contributors to tackle all the challenges. I am trying to establish a distributed team of people responsible for different parts of this story. To give you an idea, consider the following contributions: Styled-JSS was written mostly by @_lttb, and theming support is being added now by @iamstarkov. @wikiwi_io is working on the next version of our vendor prefixer and documentation site, and the jss-expand plugin was developed by @typical001. Our logo was created by @okonetchnikov. I would love to continue this with more people on board with more dedication. I see all the time how much they struggle to find time to work on it. For this reason, we recently started an open source sponsor initiative to shape our industry. What does the future look like for JSS and web development in general? Can you see any particular trends?# One problem all CSS solutions have in common, but which is especially problematic for CSS in JS, is the lack of interoperability between the libraries. All CSS in JS solutions use a slightly different DSL to express the styles, which means that the styles are tightly coupled with the library which can parse them. The big picture looks quite bad right now. Whenever you install a package from npm that uses a CSS in JS library different from the one already used in your project, one more library gets installed. Given the fact that currently there are 5-10 well-known CSS in JS solutions, the chances are good that your build will contain all of them at some point. To solve this, we started to work on the ISTF (Interoperable Styling Transfer Format) standard. The specification describes a CSS notation designed for high-performance parsing and will serve as an intermediate format for publishing. It is a layer between the consumer library and the authoring library/tool. Publishers will be able to transpile styles to this format before publishing a package to npm, similar to what we do with Babel for ES6. Consumer libraries will then be able to use this format to render CSS most efficiently. I think this format is the future not only for all CSS in JS libraries but also for well-established languages like Sass. For the end-user, it means that they will be able to use any interface with any syntactic sugar they like to produce CSS, and the result can still be processed by just one library of their choice implementing ISTF, no matter whether it's on the server or the client. To those who prefer static CSS, don't worry, this case is on top of our priorities. We are not going to force you to generate CSS at runtime.
What advice would you give to programmers getting into web development?# Take open source seriously. I learned 90% of what I know about computers and programming from it. Also, it is the best way to share knowledge and become a better engineer and ultimately a better person. I am still learning and trying to become better. It is a lifelong process, so it is important to choose the way we do it wisely. Who should I interview next?# @iamstarkov created a unified theming solution for React which will soon be used by all the key CSS in JS libraries. @olivtassinari is doing a great job persistently maintaining the Material UI library. @_developit is pushing the boundaries of what is possible within 3Kb. @iamsapegin created a tool called React Styleguidist which provides the best dev environment for writing components. Editor's note: I interviewed Artem earlier about Styleguidist. Conclusion# Thanks for the interview Oleg! I share your sense of design when it comes to plugin systems. Composition seems like a strong way to solve a lot of problems even if you get certain tradeoffs in return. You can learn more about JSS on GitHub and the official site of JSS.

renovate - Keep npm dependencies up-to-date - Interview with Rhys Arkins

There's one pain most JavaScript developers share - dependency management. More specifically, how to keep them up to date. Sometimes even one month is a long time as improvements keep coming and the dependencies changing. To understand a potential solution to this problem, I'm interviewing Rhys Arkins, the author of renovate. Can you tell a bit about yourself?# Key Location. Prior to this I was lucky to catch the tail end of a great period in "telecoms" via a startup that IPO'd and was later acquired. How would you describe renovate to someone who has never heard of it?# Renovate provides a way to automate the updating of package.json dependencies within a project's workflow via the use of branches, CI testing and pull requests. How does renovate work?# Renovate scans each repository for all package.json files, and checks with the npm registry if any existing dependencies have newer versions available. Once renovate has a list of upgrade candidates, it creates branches in the repository for testing each upgrade individually, and can also open pull requests - either immediately after the branch is created or after tests have completed. By default it also separates major releases into their own branches / pull requests. For example, you might be testing a patch update to webpack 2.x while also seeing if / where webpack 3.0 breaks in your build. It's somewhat configurable and tries not to be too opinionated, so almost every step above could be accompanied with a "...unless you configure it to..." disclaimer. How does renovate differ from other solutions?# The main alternative that many are familiar with is Greenkeeper, a commercial service for a similar purpose that has deservedly become fairly well known and used. Philosophically, renovate differs by being an "open source first" project where the primary aim is to allow people to run it themselves easily (e.g. with npm i -g renovate). Existing commercial services had / have the approach of "telling you when updates to your dependencies break your software". I prefer a default approach of locking down exactly what dependencies are present and not upgrading unless they pass tests. For instance, these other solutions pin dependency versions if something breaks, whereas I prefer to pin the versions by default, including using yarn or npm lock files. Technically, renovate has a few nice features which I believe are currently unique:

Support for both GitHub and GitLab
Autodiscovery of all package.json files in a monorepo
Configurable options at global-, repository-, package file-, dependency type- and package-level (including using regex matching patterns to group related updates)
Fully configurable branch, commit and pull request strings, via handlebars templates
Automatic generation of yarn.lock and package-lock.json files with any package.json updates, if they already existed
Policy-based automerge of dependencies (e.g. minor updates only, devDependencies-only, etc.) once they pass tests, to reduce human work
Branch-only automerges: automerges can also be done with branch commits or merge pushes - no pull request necessary - which greatly reduces the daily GitHub notifications "noise"
Keeping dependency versions in a yarn.lock updated even if package.json versions haven't changed

renovate is itself stateless and bases its logic solely on the npm registry and whatever is in the repository. So if there's a crash or resumption, there is no need to rebuild anything or worry about duplicates.
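To show how these layers of options fit together when running renovate yourself, here is a hypothetical config.js for the CLI. The option names follow renovate's documentation but may differ between versions, so treat this as a sketch rather than a reference:

```js
// Hypothetical self-hosted renovate configuration (config.js).
// Option names follow the docs and may have changed between
// versions; this is a sketch, not a reference.
module.exports = {
  platform: "github",
  token: process.env.GITHUB_TOKEN,
  repositories: ["myorg/myrepo"],
  packageRules: [
    {
      // Automerge minor and patch updates once they pass tests,
      // as described above.
      matchUpdateTypes: ["minor", "patch"],
      automerge: true,
    },
  ],
};
```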
Why did you develop renovate?# Like many others, I had a personal itch to scratch. I previously had needed to disable automatic dependency updates on my main website project because none of the existing services supported monorepo repository structures. After subsequently wasting half a day troubleshooting a browser issue which turned out to be caused and already fixed by a dependency I had missed updating, I decided I'd hack together a script to manage monorepo package.json updates from the CLI. Once I found out that it could be done relatively elegantly using the GitHub REST API (not requiring any git cloning), I decided to make it less hacky and open source it for others in a similar situation. So primarily this was driven by a technical need rather than any particular desire to build an open source version of something. What next?# I've recently added renovate as a free GitHub app. Again, the code for this is completely open source, and I was happy to find out how simple it was to add the integration. As simple as running the script is, I think a lot of people prefer not to maintain yet another server or cron job in their routines, so this is another option. Functionality-wise, I'm looking into: Improved traceability of logs, e.g. being able to filter to just a single dependency to work out why it was or wasn't updated to X. Native "semantic" commit message support. Currently users can edit/override templates as they wish, but it would be nice to automatically support Angular-style semantic commits out of the box, for instance. What does the future look like for renovate and web development in general? Can you see any particular trends?# The draw of open source continues to be strong, and not just for any philosophical reasons. I now feel hesitant to adopt any libraries where I can't see the source and the issues or know what's going on under the hood, even if I don't intend to actively contribute. I was happy to see Google open source their Firebase JavaScript SDK recently, for example - a huge improvement on their previous approach which was closed in every way. One related trend I would like to see is the end of "snippets" for embedding closed-source third party libraries into websites. Developers need to seize back more control in terms of bundling, loading timing and priority, etc. The whole "this won't slow down your website" disclaimer that most use is obviously a load of bunk. There are few vendors supporting this approach so far (i.e. open sourcing their client JS code as an alternative to loading via snippet) and market forces would suggest this is because customer developers aren't asking loudly enough. What advice would you give to programmers getting into web development?# You would be surprised at how much experience and exposure you can get by contributing small patches and fixes to existing open source libraries. Who should I interview next?# Once you start noticing certain prolific open source authors, it's like the yellow car phenomenon and you start noticing the same people everywhere. Gleb Bahmutov is one of those for me, although I'm not sure if he could easily decide which of his libraries to make the focus of an interview. Any last remarks?# Naturally I need to thank the hundreds of authors and maintainers of software I use every day, including as a part of renovate. And thanks for having me on the blog! Conclusion# Thanks for the interview Rhys! renovate certainly looks like a solid solution to an important problem. Learn more about the project at GitHub.

Most.js - Monadic streams for reactive programming - Interview with Brian Cavalier

If there's one trend that has been nice to notice, it's the rise of reactive programming. You can see this in technologies like RxJS and Cycle.js. To learn more about the topic, I'm interviewing Brian Cavalier, one of the authors of Most.js. Can you tell a bit about yourself?# Brian Cavalier I'm a software engineer at Yelp in Pittsburgh, PA, where I work on Node-based web services and distributed systems. I had done all kinds of stuff before I started writing JavaScript: Basic, Assembly, C, C++, Ruby, ML, and way more Java than I want to admit. Recently, I've done a decent amount of Haskell, and have been actively digging into PureScript, Rust, and Idris. I love learning about how to solve problems in different ways. In 2007, I was working for a Pittsburgh startup as a Java server-side engineer. They wanted to create an ambitious web UI, and I ended up diving into the role of front-end JavaScript developer. A few years later, John Hann (unscriptable) and I created cujojs, and I became hooked on doing open source work. How would you describe Most.js to someone who has never heard of it?# Most.js is a library for reactive programming. It helps you combine streams of events, like DOM events, to create highly interactive applications. Asynchronous programming is hard because trying to reason about when things happen and in what order is hard. Most.js makes this easier by giving you a declarative DSL for explicitly describing how asynchronous events relate to each other. For example, if your goal is to log all the mousemove events until the user clicks the mouse, you can declare that's what you want:

```js
import { mousemove, click } from "@most/dom-event";

mousemove(document)
  .until(click(document))
  .observe((e) => console.log(e));
```

The ability to describe what the result should be, rather than having to try to detail all the steps of how to achieve it, is a central idea of Most.js's declarative functional API. How does Most.js work?# The primary architectural concept in Most.js is the Stream, which represents an asynchronous sequence of discrete events, like mouse clicks, or WebSocket messages. Under the hood, a Most.js Stream is a composition of two other important concepts: Source and Sink. A Source produces events, and a Sink consumes them. For example, a particular kind of Source may represent DOM events, like mousemove() and click() above, which produce DOM mousemove and click events on the document. In contrast, observe() is an example of a particular kind of Sink that consumes events and passes them to a function you provide. The vast majority of operations involve both a Source and a Sink. For example, map(), which transforms all the events in a stream, acts as a Sink by consuming events, and as a Source by producing new event values after applying a function to them.

```js
mousemove(document)
  .until(click(document))
  .map((event) => `${event.clientX}, ${event.clientY}`)
  .observe((e) => console.log(e));
```

So, when you create and transform a Most.js Stream, you're building up a chain of Sources and Sinks that represent the behavior of the Stream. However, Most.js Streams are not active until you consume them, by using one of the "terminal" combinators: observe, drain, or reduce. When you call one of those, the Stream sends a signal through the Source-Sink chain to the Source at the very beginning of the chain. That producer Source will then begin producing events. Events are then propagated synchronously from the Source through the Source-Sink chain by a simple method call.
In the example above:

The mousemove producer Source propagates a mousemove DOM event by calling the until Sink's event method.
If the mouse hasn't yet been clicked on the document, the until Sink propagates the event to the map Sink by calling its event method.
The map Sink then applies the mapping function to the event value and calls the observe Sink's event method.

This direct synchronous method call event propagation model is one of the keys to Most.js's simple and performant internal architecture. Check out the Architecture wiki to read more about the details of the Source-Sink chain, including how error handling works and how it avoids having to try/catch in every combinator. How does Most.js differ from other solutions?# Performance# I think many people know Most.js because of its performance characteristics, and that was certainly a goal from the beginning, along with modularity and a simple API. The simple call stack event propagation architecture, plus hoisting try/catch out of combinator implementations, were two of the earliest and biggest performance improvements. Most.js performs several other optimizations automatically, based on algebraic equivalences. A relatively well-known example is combining multiple map operations, e.g. map(g, map(f, stream)), into a single map by doing function composition on f and g. Most.js also combines multiple filter operations, multiple merge operations, multiple take and skip operations, among others. These optimizations reduce the number of method calls needed to propagate an event from producer to consumer. Unapologetically Declarative# To me, though, Most.js's more strict adherence to a smaller declarative API is even more important, and maybe even a bigger differentiator. Asynchronous programming is complicated in general. JavaScript programs often deal with many interleaving asynchronous events, and as programmers, we have to try to coordinate all of them. Using imperative approaches, especially those that rely on the developer to manage shared mutable state, to try to coordinate highly asynchronous systems is difficult because we have to think carefully about the operational semantics of the system. We have to look at our static code and execute it in our heads to figure out the order(s) in which things might happen. Then, we have to convince ourselves that our code is correct for each possible ordering. As one example, Most.js event streams' core API doesn't provide an imperative "unsubscribe" function. Instead, you use combinators such as until, take, takeWhile, and skipAfter to declare, up front, the slice of an event stream you want. You declare what your intentions are, and Most.js takes care of the how and when. Why did you develop Most.js?# Two big personal reasons are learning, and that reactive programming is the way I want to be building front-end JS apps. I believe in learning by doing. I wanted to find out more about reactive programming and Functional Reactive Programming (FRP) because they just seemed like such a great fit for front-end JS development. After I had discovered reactive programming concepts, I started reading all the papers and source code I could find. Finally, I decided that the best way to learn even more was to try to implement something. That's basically how the project started. As for technical motivations, there were several. Performance, architectural and API simplicity, and modularity have been driving factors from the beginning.
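To make the Source-Sink chain concrete, here is a toy sketch of the idea - plain objects whose event methods call the next Sink synchronously. This is an illustration only, not Most.js's real internals:

```js
// Toy Sinks (illustration only, not Most.js's internals). Each one
// receives an event and forwards it to the next Sink synchronously.
const mapSink = (f, next) => ({
  event(time, value) {
    next.event(time, f(value));
  },
});

const untilSink = (next) => {
  let stopped = false;
  return {
    stop() {
      stopped = true;
    },
    event(time, value) {
      if (!stopped) next.event(time, value);
    },
  };
};

const observeSink = (g) => ({
  event(time, value) {
    g(value);
  },
});

// Wire a chain equivalent to source.until(...).map(...).observe(...):
const chain = untilSink(
  mapSink((e) => `${e.clientX}, ${e.clientY}`, observeSink(console.log))
);

chain.event(0, { clientX: 1, clientY: 2 }); // logs "1, 2"
chain.stop(); // like a click ending the stream
chain.event(1, { clientX: 3, clientY: 4 }); // ignored
```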
A while back, there was a GitHub issue asking why someone might pick Most.js over other reactive libs. I wrote a longer answer there with more detail about the technical reasons and differences with other libs. It's still a good read and sums up my motivation pretty well. What next?# @most/core# There are a few exciting things on the horizon. The Most.js team is working on @most/core, where we've extracted a minimal core of the Most.js architecture and combinators. It's a base reactive events package that has a strict focus on a lean, declarative API, and incorporates more functional programming concepts. For example, it has a functions-only API, where every function is curried, so you get partial application and function composition. It's also even more modular and exposes more pieces that other developers can use in building new event sources and combinators. For example, Most.js's high-performance scheduler is available in the @most/scheduler package. And we're planning to expose many of Most.js's internal testing tools as a part of @most/core. You can npm install --save @most/core to try it today. It's not yet 1.0, and we have some work to do on documentation and examples, but they're very usable. Most.js 2.0# These new @most/core packages will form the basis of Most.js 2.0. They're a separate project at the moment, but once they hit 1.0, we'll start the work of building Most.js 2.0 on top of them. Experimental packages# We're also experimenting with a package of continuous values, aka "Behaviors" or "Properties", values that vary over time, as a companion to Most.js's discrete event streams. The notion of continuous values is quite common in FRP in other functional languages, like Haskell and PureScript, and a few other JS reactive libraries, such as Bacon.js and kefir, provide them as well. Some things can be modeled more simply as values that vary over time rather than as discrete occurrences (events). For example, a mouse click is fairly clearly a thing that occurs, an event. However, the position of a spaceship in a game is a value. It varies over time as the ship moves but doesn't occur per se. We're very use case driven, and we love feedback, so we encourage folks to try it out and give us feedback on Gitter. What does the future look like for Most.js and web development in general? Can you see any particular trends?# I see a trend toward functional programming techniques in the JavaScript community. I think it's fascinating how JavaScript, being such a flexible language, can support both OO and functional techniques fairly effectively. Declarative (vs. imperative) programming seems to be on the rise and fits well with the similar swell in reactive programming techniques. TypeScript and Flow have also raised the awareness of the benefits of strong static type systems. I think we'll continue to see more tooling around type checking: better IDE support, better type systems, code generators, tools for dealing with foreign data (like PureScript's foreign package). These technologies make everything safer by reducing the kinds of mistakes that can make it through to deployment. We plan to continue embracing these things in Most.js. For example, Most.js has a full set of TypeScript type definitions, and @most/core has a complete set of both TypeScript and Flow type definitions. We use type checking in the development of Most.js and @most/core, and even type check our unit tests.
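Since a functions-only, curried API can be hard to picture from a description alone, here is a small sketch using @most/core's published functions. It follows the documented API, though details may have shifted on the way to 1.0:

```js
import { periodic, scan, take, tap, runEffects } from "@most/core";
import { newDefaultScheduler } from "@most/scheduler";

// Log a running count driven by a one-second timer, five values in
// total. Every function is curried, so partial application composes.
const counts = take(5, scan((n) => n + 1, 0, periodic(1000)));

runEffects(tap(console.log, counts), newDefaultScheduler());
```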
What advice would you give to programmers getting into web development?# There are a few things that have become very important to me in every bit of programming I do now - that transcend any project, library, framework, or programming paradigm du jour. Be fearless about learning# The first is learning by doing, or perhaps more accurately in my case, learning by trying and failing! One key has been learning that it's ok to fail. It's ok to read about a concept, or algorithm, or data structure in a blog or paper, and then write code solely to try to learn more about how the thing works. Make lots of mistakes trying to get the thing to work. Not everything has to become a long-lived project. If you learn something (even if it's the best way not to do something!), you can take that with you no matter what happens to the code. Focus on simplicity# Simplicity has become the most important guiding principle in everything programming-related I do. Simplicity in code, API design, directory structure, project management, communicating with other team members ... everything. Simple is hard. It requires think time and sometimes trying and failing. Simple helps others. Sometimes it takes a while to reap the benefits of simple. On the other hand, "easy" may feel like it helps right now, but often lays a complexity land mine you (or someone else) will step on later. Often, you have to find a balance between the two. I always try to err on the side of simple when I can. Be kind# I've gotten way more from the open source web community than I've given to it. In many cases, that's been due to interacting with and learning from other developers who have treated me with respect and kindness. I'm very thankful for the excellent people in the web community who help others. At some point, you'll be the one who knows more than someone else. When it happens, be one of those kind, awesome people. Who should I interview next?# I think an interview with Tylor Steinberger, creator of Motorcycle.js and a Most.js contributor, would be great. It's amazing that he's completely self-taught. I've become a huge fan of Rollup, and I think it'd be cool to interview Rich Harris about it, and about modern JavaScript build tooling in general. Editor's note: Brian suggested interviewing Phil Freeman, the author of PureScript. As it happens, I interviewed him earlier. So go check out the interview. Any last remarks?# I really want to thank the Most.js core team: Tylor Steinberger, David Chase, and Frederik Krautwald. They've contributed a ton of ideas and code, and they proposed the idea of @most/core. Given that Most.js started as a project to help me learn about reactive programming, I never expected it to become as popular as it has. Thanks to everyone who has supported it, who has sent a PR, and who is using it to build cool things! Conclusion# Thanks for the interview Brian! It's refreshing to see reactive approaches make their way to JavaScript. I feel a lot of these ideas are slowly but surely beginning to enter the mainstream as people discover their value. By changing your thinking you can forget about older problems while gaining more powerful constructs to use. To learn more about Most.js, head to the Most.js GitHub page and study especially the examples.

React Alicante - The international React.js conference in Spain - Interview with Victoria Quirante

There are plenty of events out there. What is it like to organize one? I know it's hard work based on what I've seen. To get more perspective, I'm interviewing Victoria Quirante, one of the organizers of React Alicante, a new React conference organized in late September in Spain. Can you tell a bit about yourself?# I started as a web developer back in 2008 and founded Limenius together with Nacho Martín in 2011. There we provide consulting, training and development services to other companies, working mainly with React, React Native, Elixir, and Symfony. Apart from coding, I love playing football (soccer), swimming outdoors, and reading. I have run a couple of half-marathons, and this summer I plan to kayak around the island of Menorca in eight days. How would you describe React Alicante to someone who has never heard of it?# React Alicante is an international conference focused on React.js and React Native hosted in sunny Alicante, Spain. Its first edition is going to take place on September 28-30, 2017. There will be one workshop day for beginners and two conference days with more advanced talks and case studies. What does React Alicante offer?# It offers the opportunity of spending a few days improving your developer skills, meeting people from around the world, and enjoying the food and warm weather of the Mediterranean coast of Spain. Regarding the content, the event will start with two workshops on Thursday, where participants will learn the fundamentals of React and React Native and code their first application in both technologies. The conference itself will take place on Friday and Saturday. It will be single-tracked, with 7-8 talks each day. The venue is a nice hotel (with a terrace pool!) close to Alicante's port. Lunches and coffee breaks will be served at the hotel. We will also have refreshments at the end of each day, and a closing party on Saturday. How does React Alicante differ from other events?# Our idea is to make sure that the atmosphere is delightful for everyone. In my opinion, when organizing an event like this, there are two things that you need to take care of: the quality of the talks and the quality of the networking opportunities. People attending want to learn, but they also want to have a good time and the chance to meet other developers. I think it is important to make that easy for them. If they finish the weekend not only with lots of new ideas but also with a few new friends, the experience is much more valuable. With that in mind, we are trying to attract people from as many different countries as possible. I honestly believe that things are more fun when you are surrounded by people from different places, and we are trying to create an event with an emphasis on that. Apart from that, three keywords: sun, beach, and paella. ;) Why did you decide to arrange React Alicante?# It has been a combination of a few things. After trying other front-end options in the past, we have been working with React already for a couple of years, and we are thrilled with it. We believe that both the React and the React Native developer communities should keep growing in the coming years, and we thought about helping a bit with that. On the other hand, lately we have been attending a good number of international conferences as speakers. Thanks to that, we have got a good picture of what other events offer, as well as an idea of what things work best for all parties involved: attendees, speakers, and sponsors.
Finally, I think that in Spain there is room for an event like this and that Alicante is the perfect place to host it - because of the weather and the good connections by train and plane, thanks to being a popular tourist destination. To sum up, we thought that it was the right time and the right place to do it. What next?# The conference has just been announced, and the initial feedback has been really good; we are excited about it. But there is still a lot to do. We already have some confirmed speakers, and the call for papers will be open until June 30th. We plan to announce the full program on July 4th. After that, we need to make sure that everything gets in place for this first edition. What does the future look like for React Alicante?# We would like to turn it into a yearly event that many front-end developers out there want to attend. But of course, the first step is to focus on making this first edition a great experience for everyone attending and taking part. What advice would you give to programmers getting into web development?# Do not get obsessed with just one technology or too focused on only a particular task. Specialization is good, but the web development world changes fast, and at the beginning, it is more important to get a good foundation, rather than to learn a few libraries by heart. My advice would be to start trying to understand how everything works from a high level, then be able to implement a simple back-end and front-end by yourself. Try a few different technologies, be open, and then start choosing what things you like most. Specialization should be the last step, not the first one. And being open to the changes and new things coming is key. Who should I interview next?# I would suggest you interview Forbes Lindesay. He is the creator of pug, cabbie and ESDiscuss. He has been working on large-scale React applications at Facebook, and he will be giving the workshop “Introduction to React.js” at React Alicante 2017. Any last remarks?# One of the most enjoyable things about being a software developer is to meet others like you, share your success and horror developer stories with them, and learn from each other. Attending events like React Alicante is one of the best ways of doing it. I hope to see you there! :) Conclusion# Thanks for the interview Victoria! I hope you get a lot of great people at React Alicante. Maybe I get to participate in a React conference one day. :) To learn more about the conference, go check the React Alicante site.

Hard-cover Edition of the Webpack Book and Training

Print on Demand services like KDP are ideal for self-publishers like me as they take a lot of pain out of the process. You don't have to worry about printing or distribution. It's not free money as you still have to worry about marketing and getting your work noticed. To experiment with an option, I had a hard-cover edition of the webpack book made a while ago. Doing this gave me insight into the process and helped me to understand the cost structure, so it's easier to repeat the process if it seems worthwhile. It is surprisingly expensive to get a thick book (499 pages) printed and delivered from a remote location like Finland. My trial run was 40 copies of the book, and I have 20 copies left to sell. How to Get a Copy?# To account for the logistics and "rarity", I decided to price this edition at $100 per piece. You can get two for $150. To keep it fair, I include 30 minutes of remote consulting in the price. Let me solve a hard problem for you or at least push you in the right direction. If you want one of the remaining copies, let me know and I'll get one sent to you after payment has gone through. Due to logistic issues, the books don't come with a signature. But if we meet, I'll be happy to sign the book for you. Webpack Training# As I've been completing my second tour, this has been a good chance to improve my training offering. I've pushed it to include more exercises, and I also redid most of my React material. I also have a two-day version of the webpack material. One day can be too intense, especially with a mixed group of people. I'm organizing workshops in three places with the help of local partners. See the links below if you want to participate. The pricing depends on the location.

webpack in Vienna, Austria - 100 euros or more for the beginner workshop, 200 euros or more for the advanced workshop. I'm going to try a kind of "pay what you want" pricing based on satisfaction here. Both take a full day.
webpack in Augsburg, Germany - Prices for a full day begin from 399 euros. It is cheaper when you participate in both.
webpack in London, UK - Two-hour super-condensed workshop for FullStack 2017.

Let me know if you want a workshop in your city (preferably within Europe). Maybe we can organize something. We should also do a free meetup session then too. Webpack Book Extras# To reward the paying readers of my webpack book, I've added two extras to the Leanpub edition. The first extra contains a two-page cheat sheet with the book essentials in a condensed format you can print out. I give signed copies to my workshop participants and random people I might encounter during my travels. I compiled the second extra based on QA sessions I've been running with Tobias. It's a roughly twenty-page document full of answers to hard webpack questions. Conclusion# There is still more work for me to do (next book for instance) but things are rolling in a good direction. I hope to announce more in the next status update! Enjoy the extras.

Working with Junior Developers - Interview with Aimee Knight

Even though software development is often seen from a technical perspective, there's a softer side to it. It is hard to avoid having to work with people unless you are a mythical programmer living in a cave somewhere in Finland. To understand the topic better, I'm interviewing Aimee Knight about working with junior developers. Can you tell a bit about yourself?# Outside of work, I'm a weekly panelist on the JavaScript Jabber podcast, and I regularly participate in a variety of others. In my spare time I love speaking at conferences, playing with new technology, running, working out, and trying new flavors of Kombucha! How do you work with junior developers?# My work with junior developers comes mostly in the form of mentorship. Indirectly, I also believe I'm able to reach juniors on a deep level by being a panelist on JavaScript Jabber and through my conference talks and blog posts. As far as mentorship, I've worked one on one with developers in a more formal sense where we set up weekly chats, and I also devote a substantial amount of time each week making myself available for more one-off conversations on the phone, through Twitter DM, or email. I've found the latter to be the most common. In regards to JavaScript Jabber, I usually spend a bit of time prepping for each episode, and I always try to write down the questions that I initially have since I know others getting started will probably be in the same boat! For my conference talks, I spend a lot of time brainstorming ideas and organizing my content in a way that will make it approachable to newbies, but also valuable to someone who's been programming for a decade. That is the most challenging aspect of speaking for me, but as someone newer, it's essential to me. How does working with junior developers differ from working with senior developers?# Working with junior developers is only slightly different from working with a mid-level or senior developer in my opinion. With junior developers, it's imperative to have a level of awareness of the situation. In other words, a lot of juniors feel intimidated, so it's important to check in often and ask direct questions like "does that make sense?". It's far too common for newer developers to not speak up due to the intimidation factor. I also advise, when pairing, to resist the urge to take over and to let the junior drive! Why work with junior developers?# Senior developers express to me how valuable and thought-provoking it's been for them when a junior presents them with carefully thought out questions. Statistically, it's also valuable for seniors to work with juniors. Mentors are six times more likely to be promoted, and 20% more likely to get a raise. Besides that, working with a junior can do wonders for burnout. Most juniors, especially those from boot camps, are extremely hungry. The energy and passion they have for the field are contagious! What does the future look like for web development in general? Can you see any particular trends?# There are so many things I could name, but personally, I'm most excited about PWAs and AI! As far as PWAs go, I've always wanted to build for mobile, and even in web development, I think AI is going to have a huge impact. There are already things like The Grid that use machine learning to build websites! I recently bought Grokking Deep Learning and am excited to dive into it. What advice would you give to programmers getting into web development?# My advice is to get comfortable being uncomfortable. I have a talk on this...
that's how strongly I feel about it being the key to success. For me, some of the hardest obstacles in my journey have been with my self-doubt. If you're aware of that though, you can shift your focus. We are all human, and we have a finite amount of mental energy. So, it's important that you spend it wisely! If you can learn to become comfortable with the feeling of not knowing, you're able to focus solely on the challenge at hand, and you'll inevitably be able to tackle it that much faster! I also encourage juniors to try and find a mentor or programming buddy. If you can find a mentor, it's probably the fastest way to progress. Developers who received mentoring were promoted five times more often than those who didn’t. If nothing else, you can try to work on some small open source projects and get mentorship in the form of code reviews there. Your First PR is an excellent resource for finding newbie friendly projects. Who should I interview next?# I'm a huge fan of Kyle Simpson, so I'd be excited to see an interview with him next! I love his approach to learning vanilla JavaScript over focusing on a framework when you're getting started, and his content is comprehensive while also being completely approachable! Any last remarks?# My latest deep dive topic has been CSS and browser internals. I spent two years as a full-stack JavaScript developer and made the switch to front end last summer. Working for an affiliate of Warner Bros means that design implementation is critical! It wasn't like the previous applications I'd worked on where the designs could be fudged a little. Applications for Warner Bros properties need to be pixel perfect! I quickly realized I struggled with debugging CSS in the same systematic way I debugged my JavaScript. So, whether you're a newer developer or you've been programming a while you may be interested in a blog post I just finished all about this! Conclusion# Thanks for the interview Aimee! I've found working with people from different backgrounds helps to give you perspective. Magic happens when you combine the views and find something new.

vx - The Power of D3 with the Benefits of React - Interview with Harrison Shoff

Data visualization is a big topic itself. When it comes to the web, D3 is perhaps the most well-known solution. Even though you can wrap it with React quite quickly, there is value in having specific solutions. This is where vx by Harrison Shoff comes in. Can you tell a bit about yourself?# wish lists, reviews, referrals, experience marketplace, the professional photography tool, customer support chat, and the old m.airbnb.com. Regarding open source, I created the Airbnb JavaScript Style Guide and worked on Airpal with Andy Kramolisch, and Chronos with Florian Leibert and Andy again. Currently, I'm on the Observability team at Airbnb, working on monitoring tools, data visualization, and a new open source project called vx. Outside of work, I enjoy playing golf poorly, reading programming books I don't understand, and going on adventures with my beautiful wife. How would you describe vx to someone who has never heard of it?# vx is a library of low-level React components that can be used to build up reusable charts, those one-off requests, or that particular idea you had for a visualization that you've never seen done before. vx combines the power of D3 with the joy of React. It's mainly unopinionated, and the idea is that you build on top of it, keep your bundle sizes down and use only the packages you need. You don't need to know D3 to use vx, but it helps. vx stands for visualization components. Below is a flow diagram that illustrates how vx could fit in at your organization: vx flow diagram How does vx work?# Under the hood, vx is using D3 for the calculations and math. D3 is the visualization kernel used to generate the data that flows to your components. If you're creating your chart library on top of vx, it's easy to create a component API that hides D3 entirely. Because of this, you and your team could set up and share charts as quickly as using reusable React components. How does vx differ from other solutions?# To create a complete charting library, you would need to anticipate the needs of every chart out there. Instead of doing that, you tell vx what you want to make, and away you go. You only need to pull in the packages you need. No matter how you handle styling your components, how you store your state, or how you update your data, vx should feel familiar in any React codebase. Why did you develop vx?# Charting libraries are great until they're not. And mixing two mental models for updating the DOM is never a good idea. Copy and pasting D3 code into componentDidMount() is just that. The vx collection of components lets you easily build your reusable visualization charts or library without having to learn D3. I wanted to make my D3 code feel at home in my React codebase, keep file size down, and not have to predict all of the different charts I would have to make in the future. What next?# We're on the road to a production ready v1 release, and it includes the following features:

Accessibility.
Increased browser support.
More shapes.
Animations and transitions.
Easy interactions.

Follow our progress here: https://github.com/hshoff/vx/projects/1 What does the future look like for vx and web development in general? Can you see any particular trends?# For vx: vx should work on the web, native, VR, everywhere. The current implementation depends on react-dom which means it's only available on the web. I'd like to explore using react-primitives-art for cross-platform support. Check out this talk by my colleague Leland Richardson about React as a Platform.
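To give a feel for the low-level, compose-it-yourself style described above, here is a rough bar chart sketch. The package names come from vx's early releases and the prop shapes may have changed, so treat it as an approximation rather than official usage:

```jsx
// Approximate vx usage (package names and props may have changed).
import React from "react";
import { Group } from "@vx/group";
import { Bar } from "@vx/shape";
import { scaleBand, scaleLinear } from "@vx/scale";

const data = [
  { letter: "A", value: 3 },
  { letter: "B", value: 7 },
];
const width = 200;
const height = 100;

// vx wraps d3-scale behind a config-object API.
const xScale = scaleBand({
  rangeRound: [0, width],
  domain: data.map((d) => d.letter),
  padding: 0.2,
});
const yScale = scaleLinear({
  rangeRound: [height, 0],
  domain: [0, Math.max(...data.map((d) => d.value))],
});

// Compose the low-level pieces into your own chart component.
const BarChart = () => (
  <svg width={width} height={height}>
    <Group>
      {data.map((d) => (
        <Bar
          key={d.letter}
          x={xScale(d.letter)}
          y={yScale(d.value)}
          width={xScale.bandwidth()}
          height={height - yScale(d.value)}
          fill="teal"
        />
      ))}
    </Group>
  </svg>
);
```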
How does vx differ from other solutions?# To create a complete charting library, you would need to anticipate the needs of every chart out there. Instead of doing that, you tell vx what you want to make, and away you go. You only need to pull in the packages you need. No matter how you handle styling your components, how you store your state, or how you update your data, vx should feel familiar in any React codebase. Why did you develop vx?# Charting libraries are great until they're not. And mixing two mental models for updating the DOM is never a good time. Copying and pasting D3 code into componentDidMount() is just that. The vx collection of components lets you easily build your reusable visualization charts or library without having to learn D3. I wanted to make my D3 code feel at home in my React codebase, keep file size down, and not have to predict all of the different charts I would have to make in the future. What next?# We're on the road to a production-ready v1 release, and it includes the following features: accessibility, increased browser support, more shapes, animations and transitions, and easy interactions. Follow our progress here: https://github.com/hshoff/vx/projects/1 What does the future look like for vx and web development in general? Can you see any particular trends?# For vx: vx should work on the web, native, VR, everywhere. The current implementation depends on react-dom, which means it's only available on the web. I'd like to explore using react-primitives-art for cross-platform support. Check out this talk by my colleague Leland Richardson about React as a Platform. In general: the world continues to shift towards browsing the internet on phones. Most of the world isn't on wi-fi and doesn't have the latest phone hardware. We should start to see more companies treat performance as a feature and not an afterthought. It's never been more exciting to be working on the frontend. What advice would you give to programmers getting into web development?# You don't need a computer science degree. You'll throw away 99% of the code you write over the first few years. There are no shortcuts. You just have to show up and put in the work. It's a lot of fun. Who should I interview next?# My colleague Jon Gold! He's working on the future of design tools at Airbnb. Check out his latest work React Sketch.app. It melts minds. And it's well made. Any last remarks?# vx stands on the shoulders of giants. Special thanks to: Mike Bostock + the D3 community and the React team + community for all of their lovely work! All of my colleagues at Airbnb for reviewing my code over the years! Shoutout to Issaquah, WA and the University of Washington. Thanks for having me on the blog! Conclusion# Thanks for the interview Harrison! I have a soft spot for computer graphics due to my background, and combining React with D3 through vx seems like a fantastic idea to me. To learn more about the project, see the vx GitHub page and study the online demos.

unmarshaller - Toolbox for configuration - Interview with Sven Sauleau

Serialization, or the process of transforming data from one shape to another, is a common problem you eventually encounter when programming. Perhaps you want to store some state from memory to the hard drive or restore it. Or you may want to share it across the wire and consume it somehow on the other end. That's where a related concept, marshalling, comes in. In this interview you will learn more about the topic as Sven Sauleau discusses the ideas behind his library, unmarshaller. Remember to check Sven's previous interview about async-reactor. How would you describe unmarshaller to someone who has never heard of it?# If you are not familiar with the term marshalling, here is an excerpt from Wikipedia: In computer science, marshalling [...] is the process of transforming the memory representation of an object to a data format suitable for storage or transmission, and it is typically used when data must be moved between different parts of a computer program or from one program to another. Marshalling is similar to serialization and is used to communicate to remote objects with an object, in this case, a serialized object. It simplifies complex communication, using custom/complex objects to communicate instead of primitives. [...] When you have a lot of configuration, it's not easy to maintain or even to understand. unmarshaller enables you to describe your configuration in a flexible way. It also provides tools to improve configuration usage. How does unmarshaller work?# To use unmarshaller, you have to define lookups against your data. These can be custom, or you can use ones provided with unmarshaller. Lookup Function# During the unmarshalling process, the lookup function will be called to get the value for a given key. If you want to extract values from an object, the lookup will look like this:

const lookupFn = key => myObject[key];

How to provide the values is up to you. You could get them from the URL, by looking for DOM nodes, through network requests, and so on. First, you need to declare your configuration in the unmarshaller object; here is an example:

unmarshaller.js:

// The builder is a set of helper functions to build
// the unmarshaller object.
// It has the builtin types: `string`, `number`,
// `boolean`, `object` and `holder` that is used to
// nest configurations.
import { builder } from 'unmarshaller';

export const unmarshaller = {
  // `person_name` will be the key used as argument
  // in the lookup function
  name: builder.string('person_name', {
    // `description` is used for documentation
    // generation
    description: 'Name of the person',
    // If the lookup function didn't return a value,
    // the default value will be used instead
    defaultValue: 'Sven',
  }),
  customProps: builder.object('custom_props', {
    description: 'Custom properties',
    defaultValue: {},
  }),
  showAge: builder.boolean('person_show_age', {
    description: 'Show age of the person',
    defaultValue: true,
  }),
  age: builder.number('person_age', {
    description: 'Age of the person',
    defaultValue: -1,
  }),
  backgroundColor: builder.string('background_color', {
    description: 'Background color of the card',
    defaultValue: '#69b0dc',
  }),
  textColor: builder.string('font_color', {
    description: 'Font color',
    defaultValue: 'black',
  }),
};

index.js:

import { unmarshal } from 'unmarshaller';
// This is the unmarshaller object from the file above
import { unmarshaller } from './unmarshaller.js';

const lookupFn = (key) => myObject[key];
const config = unmarshal(lookupFn, unmarshaller);

// `config` is a regular JavaScript object containing
// your configuration:
//
// {
//   name: ...,
//   backgroundColor: ...,
//   textColor: ...,
// }
console.log(config);

Custom types# To be able to use custom types, you need to extend the default builder. Here is an example of a color type:

import { builder as defaultBuilder } from 'unmarshaller';

export const builder = {
  ...defaultBuilder,
  color: (name, options) => ({
    name,
    parser: parseColor, // custom parser function
    type: 'color',
    ...options,
  }),
};

You can find more information about custom types in the documentation in the project repository. Error handling# In the case of a casting error, unmarshaller will always return a value of the type you defined. For example, if you pass an invalid JSON string to builder.object, it will return {} unless you have defined a defaultValue. How does unmarshaller differ from other solutions?# I didn't find an alternative solution to unmarshaller. There are some libraries which also adopt the idea of declarative configuration, but they only focus on one usage. For example, ajv uses a declarative configuration, but it doesn't serve the same goal since it's only for validations. To understand ajv better, read the interview with Evgeny Poberezkin. Why did you develop unmarshaller?# I made unmarshaller while I was working for a company. On some projects, we would use highly customizable React components (up to 70 different parameters). The configuration needed to be settable both by developers (passing props) and remotely by non-developers. Our unmarshaller lookup function got the configuration either from query parameters in the URL or by calling a function in our proprietary SDK. What next?# Better syntax?# The unmarshaller could also be JSX based (example for Webpack). The configuration could look like this:

<unmarshaller>
  <string
    name="name"
    defaultValue="Sven"
    description="Name of the person"
  />
  <holder name="colors">
    <color
      name="background_color"
      defaultValue="#69b0dc"
      description="Background color"
    />
    <color
      name="font_color"
      defaultValue="black"
      description="Font color"
    />
  </holder>
</unmarshaller>

Most common lookup functions# Provide standard lookup functions, for example, to extract configuration values from URLs, as this would allow users to use the functions that come with unmarshaller instead of having to write them themselves.
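As a sketch of what one of those standard lookup functions could look like, using only the browser's standard URLSearchParams API (the lookupFromUrl name is hypothetical, not a current unmarshaller export):

// Look configuration values up from the page URL, for example
// ?person_name=Ada&background_color=tomato
const lookupFromUrl = key => {
  const value = new URLSearchParams(window.location.search).get(key);
  return value === null ? undefined : value;
};

const config = unmarshal(lookupFromUrl, unmarshaller);

Returning undefined for missing keys lets the defaultValue declared in the unmarshaller object kick in.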
Ahead-of-time processing# Create a Babel plugin to inline constant values in the unmarshaller object to avoid doing this at runtime. What does the future look like for unmarshaller and web development in general? Can you see any particular trends?# Since unmarshaller is flexible, I could imagine various tools built on top of it (for example, form validations). Any last remarks?# We have a few extra modules which are located in a second repository, unmarshaller-extra: unmarshaller-generator-markdown generates Markdown documentation from a given unmarshaller object, containing the name, type, default value, and a description of each configuration. In our use case, we display the documentation of our React component on GitHub and in a panel in our Storybook. The documentation will look like this:

name             | type   | default value | description
background_color | string | #69b0dc       | Background color of the card
font_color       | string | black         | Font color
name             | string | Sven          | Name of the person

You can find an example here. unmarshaller-generator-storybook-knobs generates Storybook addons/knobs from the unmarshaller object. There's an example showing how to do this in the repository. More extras are to come soon. unmarshaller is a flexible tool, and I'm excited to see tools built on top of it soon. Conclusion# Thanks for the interview Sven! unmarshaller seems to handle the problem of marshalling admirably. To learn more about the project, see unmarshaller on GitHub.

Popper.js - Easy Tooltips and Popovers - Interview with Federico Zivolo

There are times when a vanilla <abbr> or <acronym> doesn't cut it. What if you want to do something more complex? Popper.js by Federico Zivolo helps you achieve exactly that. Read on to learn more. Can you tell a bit about yourself?# How would you describe Popper.js to someone who has never heard of it?# Popper.js is a library to help you position tooltips, popovers, dropdowns, and any contextual element that should appear near a button or similar (I call them "poppers"). In short, it's a piece of code that saves you hours of work on any of your projects, since almost all of them end up featuring some "popper". How does Popper.js work?# That's a good question; I'm still trying to figure it out! Jokes apart, the principle is pretty straightforward. It takes a reference element (usually a button) and a popper element (any element you want to position), it finds a common offset parent, computes the position of the reference element relative to such parent, and then generates a set of coordinates used to position the popper element. The hardest part is to consider a whole set of edge cases which range from cross-browser compatibilities to box model peculiarities, including taking care of scrollable elements. The usage is simple:

new Popper(referenceElement, popperElement);

This code will position the popperElement on the bottom of the provided referenceElement. Also, you already have access to all the built-in features of the library. The line also achieves the following: If the referenceElement is too close to the bottom of the viewport, the popperElement will be positioned on top of it instead. If the two elements are positioned in two different parents, Popper.js will take care of it and will still position the popper element correctly. It handles scrollable elements and page resizes. How does Popper.js differ from other solutions?# There aren't a lot of available solutions, and they all cover a small subset of the cases that are instead adequately addressed by Popper.js. The main difference is in the fact that my library doesn't need to manipulate the DOM directly to work. This fact leads to two strengths: it doesn't have to move the popper node to a different context to work properly, and it can be integrated into frameworks and view libraries such as React and AngularJS with ease. You can easily do this to delegate the DOM manipulation:

new Popper(referenceElement, popperElement, {
  modifiers: {
    applyStyle: { enabled: false },
    updateReactData: {
      order: 900,
      fn(data) {
        this.setState({ data });
        return data;
      },
    },
  },
});

We have disabled the built-in applyStyle modifier (modifiers are like middleware, and most of the functionality of Popper.js is provided by them) and defined our custom modifier that only proxies the computed popper coordinates and information to our React component. Now that you have all the knowledge provided by Popper.js, you can do whatever you need to apply the needed styles to the popper element. You may have noticed that my custom modifier is returning the data object at the end. This object is needed because other modifiers may run after it and read the data object. This chain-based approach makes Popper.js extensible; you can inject any custom function before or after any of the existing modifiers, disable the ones you don't need, and alter the behavior of others simply by modifying the data stored in the data object.
Why did you develop Popper.js?# At the time of the creation of Popper.js, I worked for a company which made heavy use of tooltips and popovers in their Ember.js application. We had an internal implementation of a positioning library similar to Popper.js, written mostly by two other team members and me. Its code was pretty messy because it had been developed just to work in our particular cases, and it was deeply tied to the Ember.js internals. The time needed to maintain such a library became a problem because we spent a significant portion of our time fixing bugs related to it. We then decided to outsource it and use an existing open source library to do the job. I performed the investigation to find a suitable alternative; the only available choices were Tether and jQuery UI Position. The latter, after some quick tests, ended up being too basic to be used in our context. The only way to use it would have been to fork it and add the missing features. Tether Was Promising But Not Enough# Tether was very promising; it supported a lot of features and performed quite well. But it had some pretty limiting constraints, as the library arbitrarily moved our components away from their original DOM tree context to have them positioned as direct children of the body tag. This was a major problem because it interfered with the way Ember handled the DOM. One of the problems I remember is that our tests couldn't work because the testing environment of Ember looked for the DOM nodes only inside the root node of the Ember.js application. The other problem was its limited customizability; we couldn't add any additional behavior or feature to it. For instance, we couldn't make a tooltip switch from "right" to "bottom" in case there wasn't enough space on its right. It only allowed "right - left" and "top - bottom". A Custom Library Was Needed# I wanted to use an existing solution because I just wanted to get the job done, but with these premises, the only viable solution I found was to write my own library. My company didn't have time to allocate to write it, so I ended up writing it during a weekend...
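For comparison, the placement fallback that was missing from Tether is configurable in Popper.js through the flip modifier. A small sketch using the 1.x options format; treat the exact option names as assumptions based on that era's API:

// Prefer "right"; if there is not enough space, fall back to "bottom".
new Popper(referenceElement, popperElement, {
  placement: 'right',
  modifiers: {
    flip: { behavior: ['right', 'bottom'] },
  },
});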
What next?# Popper.js is getting adopted by more projects every day, and that's cool. My biggest "competitor", Tether, has been discontinued, and its authors now point to Popper.js; I hope to be able to serve their users as they deserve. Bootstrap recently merged a PR to use my library in their code base. I hope to see a larger number of contributions on my project as a result. Other great developers have developed integrations for Popper.js to use it in the most popular libraries such as React, Preact, and Vue.js; others are working to create one for Ember.js. Only Angular is behind and needs a proper integration. Certain outstanding issues have to be fixed to handle all the edge cases. More tests have to be written to ensure high quality and reliability, and the API will probably need some makeover in the future. There is a lot of work and not much time available, but I'll do my best to maintain the library and improve it continuously. Some help would be very welcome. 😉 What does the future look like for Popper.js and web development in general? Can you see any particular trends?# The most innovative idea behind Popper.js is its modularity; no other similar library lets you completely de-opt from any DOM manipulation and delegate it to your code. I think we may see more libraries follow this direction and make the life of other developers easier. Since the current front-end scenario is populated by a lot of different technologies, library authors must adopt a model that allows consumers to integrate them with the existing frameworks and libraries without compromises. What advice would you give to programmers getting into web development?# It may sound childish, and a lot of folks will tell you that it's a matter of preferences and blah blah... But I think the future of web development is in the functional, data-driven development promoted by Facebook with React. The whole idea of state management "introduced" [1] by those guys has saved my team and me hundreds of hours of development already. If you are getting into web development, first learn the basics of the web: HTML, JavaScript, and CSS. Then, move to any framework or library that follows the data-driven and functional principles; if not React, anything which shares the same idea. Doing this will give you a mindset that will help you handle and resolve any situation. [1]: A necessary note: Facebook didn't invent it; they simply promoted it within the web development environment. Who should I interview next?# Travis Arnold (@souporserious), who is working on some cool responsive component libraries and worked on react-popper; he knows better than anyone else how to integrate libraries into React. Gajus Kuizinas (@kuizinas), who works on a lot of awesome stuff, but I especially like his ideas about CSS Modules vs. CSS in JS solutions. Nik Graf (@nikgraf), for his work with React VR! Any last remarks?# If you want to be a great developer, remember to have fun along the way. 🙃 Conclusion# Thanks for the interview Federico! If I need tooltips or popovers, I know where to look now. Remember to check the Popper.js demos and Popper.js on GitHub.

SurviveJS Euro Summer Tour 2017

It's time for another tour. This time around I'll be focusing on training. I have material specifically for webpack and React, and this will be a good excuse to improve my book offerings while at it. I've found that working on training material forces you to think carefully, and this work flows back to writing. On this tour I'll spend time particularly in Slovakia, Germany, Austria, and the United Kingdom. I'll spend a few weeks in Vienna, and there's room in the schedule. 31.5-12.6 - Košice, Slovakia# I'll start the tour from Košice, and I'll be offering at least two sessions there, one of which will be public. 12.6-14.6 - Prague, Czech Republic# I get to spend a day in Prague, one of my favorite cities in Europe. I don't have any specific plans yet, but I might go and see some Mucha art. 14.6-18.6 - Berlin, Germany# Most of my time in Berlin has been reserved by a client, but I'll have time especially on the 14th, 17th, and 18th. 18.6-2.7 - Vienna, Austria# I'll spend most of my time in Vienna. This is actually a good time for you to book me, as it's easy to reach places from there. In the worst case, I get some vacation time in a great city! 2.7-10.7 - Augsburg, Germany# The purpose of the Augsburg week is to focus on training through a local partner. 10.7-15.7 - London, UK# I'll participate in FullStack 2017 as an invited speaker. I'll have a short, two-hour workshop on the 14th based on the Webpack - From Apprentice to Master material. Conclusion# Compared to the previous tour, this one will be more relaxed, and I'll have more time available. The earlier experience will come in handy this time around, and I'll be open to opportunities! If you are interested in my services, please check out my training offering. You can also try to invite me to your meetup if we can find an interesting topic and you are willing to cover the associated costs.

ES Modules - Interview with Bradley Farias

Even though ES6 (ES2015) brought modules to the language, it missed one important thing - a loading method. Proper support is currently being implemented for browsers. To learn more about the topic, I'm interviewing Bradley Farias. Can you tell a bit about yourself?# TiddlyWiki was the first open source project that I worked on in college. It was a single page wiki that could save to disk back in 2005. That is what got me interested in JavaScript. I spent many hours trying to recreate various things such as a spreadsheet editor and a polyfill for Range in IE6. After college, I worked at different companies, eventually finding Node at the end of 2009 and working at Nodejitsu from 2011 through 2013. Since then I have bounced around between front-end development with a focus on accessibility and lots of backend tooling workflows. Editor's note: I used TiddlyWiki years ago as my personal wiki on a USB stick. How would you describe ES Modules to someone who has never heard of them?# They are a new mode of JavaScript code that allows you to link JavaScript variables between files. ES Modules are statically linked, meaning that when you import variables, the engine must link those variables before evaluating the module. Whether ES Modules are async or sync is left unspecified in the JavaScript specification, so even though all environments are aiming to make async module systems, someone could make a sync module system using them. Consider the example below:

index.js:

// Request the `foo` variable from `./foo` be put into scope
import { foo } from './foo';

foo.js:

const foo = 'foo';

// Mark `foo` as being exported
export { foo };

How do ES Modules work?# Being a new mode of JavaScript, the first thing is that you have to get your environment to parse ES Modules. The ES2015 specification laid out the plans for how ES Modules would be used. However, with no loading mechanism, there was no clear plan for browsers or servers as to how to load modules. It wasn't until sometime later that WHATWG proposed <script type=module> and Node proposed a new .mjs file extension to clarify to the environment how modules are loaded. Shared Variables Have to Be Linked Together# After being loaded, the engine needs to link together all the variables that are shared between modules. That means all the modules in the dependency graph need to be available. The engine recursively reads each source text for the modules and finds all of the dependencies of the modules until there are none left. Throw an Error on Failure or Proceed and Hoist# If some modules cannot be found, the engine throws an error. Otherwise, it takes all variables marked with export and puts read-only views of them in the modules that import those exported variables. At this point, JavaScript's hoisting takes place, and function declarations and variables are hoisted and allocated. These functions can be called before the module evaluates, but they might encounter errors from other variables not being initialized yet. Linked Graph Will Be Evaluated# Now that the module graph is linked, it is time to start evaluating it. The engine takes a depth-first traversal from the entry module in the order in which the import declarations appear in the source text and starts evaluating. If any module throws an error while evaluating, the engine stops evaluating modules and leaves them in the current state of evaluation.
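To make the loading story concrete, here is a minimal browser-side sketch; the file names are made up for illustration:

<!-- index.html: ask the browser to treat main.js as a module -->
<script type="module" src="./main.js"></script>

main.js:

// The engine fetches and links ./greet.js before this file evaluates.
import { greet } from './greet.js';

console.log(greet('world'));

greet.js:

export const greet = name => `Hello, ${name}!`;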
How do ES Modules differ from other solutions?# First and foremost, I need to preface this by stating that transpilers don't implement ES Modules. They implement a transform of ES Modules syntax to CommonJS semantics and APIs. What I am talking about probably doesn't work the same as a transpiler. New Parser and Evaluation System# ES Modules use a new parser and evaluation system in the JavaScript specification. They automatically make your code have the same rules as "use strict", reserve await as a keyword, and have some changes to how scoping works. ES Modules are a statically linked module system. Unlike CommonJS or AMD, all dependencies must be known and parsed before any user code evaluates.

console.log('Hello World!'); // Never evaluates

import './doesNotExist'; // Will error
import { doesNotExist } from './doesExist'; // Will error

Variable Bindings, Not Value Bindings# ES Modules work with variable bindings, not values. Other module systems share values; ES Modules share variables. That means if a variable is updated, all files sharing that variable see the update.

// Every file will see `uptime` change over time
export let uptime = 0;
setInterval(() => uptime++, 1000);

ES Modules are being implemented as asynchronous. CommonJS is a synchronous module system that stops executing code while dependencies load. To be compatible with performance concerns on the web, ES Modules are asynchronous in all future implementations. Due to this, you can have code executing while loading a module graph. It also means that ES Module graphs can be loaded in parallel, even if they overlap. Specifiers are URL Based Strings# ES Modules specifiers are being treated as URL based strings. In some module systems, like CommonJS, ./hello?world=earth would be treated as a file path. These are now always URLs. ES Modules evaluate once per distinct URL. That means implementations would always load the file for ./hello but then add the query string to the file metadata; ./hello?world=moon would load a second time after earth!

import './echo?msg=hi';
import './echo?msg=there';

// Prints:
// > hi
// > there

ES Modules are idempotent. Within a given source text, import { foo } from "./foo"; will always return the same variable foo. Tools can treat multiple imports as referring to the same variable, and it also means that even if someone uses import('foo'); it will return the same set of variables every time. Why use ES Modules?# Removing build steps. With ES Modules, people can write applications without needing to use a tool like webpack or Browserify. However, browsers are still figuring out how they want to import things like import 'react'; for now, use relative or absolute paths. Code splitting. Because ES Modules are asynchronous and able to load in parallel, module graphs can have multiple entry points that only touch the parts of a codebase that are needed. Enhanced tooling capabilities. Tools like rollup can combine ES Modules with a technique called "Tree Shaking" that removes unused code from a bundle's output. Editors can check if a variable is exported when a developer uses an import since ES Modules use a new syntax. What next?# import() is coming to both Module and Script modes of JavaScript and will allow Modules to be loaded dynamically. A way to get the URL of the Module for a source text is being standardized. Browsers are rolling out the <script type=module> ES Module loader, allowing people to start testing ES Modules and figuring out workflows. Tools are landing .mjs support, allowing interoperability with both Node and the web. Node is going to expose the .mjs based ES Module loader, allowing people to start testing ES Modules and figuring out workflows.
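As a sketch of what dynamic loading with import() could look like once it lands (the module path is made up):

// Works from both Script and Module code; resolves to the module's bindings.
import('./charts.js')
  .then(charts => charts.draw())
  .catch(err => console.error('Failed to load charts', err));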
What does the future look like for ES Modules and web development in general? Can you see any particular trends?# It looks pretty exciting; there will be a definite transition time while bare URLs are figured out in the browser and people start using .mjs. I think that one day we will have development servers that can run ES Modules without any build step, but that is probably a ways away. Even in development, people may want to use code transforms for things like JSX or other templating. The web is moving towards a more tooling-heavy ecosystem, and that has caused some difficulty. I think that this trend is likely to continue as things like WASM become integrated with JavaScript. Tools should be embraced so that they can be improved to the point where they are not thought about when using them. What advice would you give to programmers getting into web development?# Do not despair! The web is one of the most challenging and complex programming environments out there. There are many ways to do things, so don't be afraid if your code looks different from any other code. Make your code work and enjoy what you have done. Who should I interview next?# This is a bit of a rough one; I would say Caridy Patiño is a good choice. He has a lot of involvement in places like internationalization and TC39. Any last remarks?# Try and stay true to yourself, whoever you are. People can get very heated on technical topics, but don't let them pressure you into anything. Stay open to criticism, listen to others, and become stronger in your beliefs. Conclusion# Thanks for the interview Bradley! I think we live in interesting times, and pushing module loading to the browser level feels like one of the last missing bits. It will change the way people think about web development again.

Blue Arrow Awards - Finnish Code Ambassador of 2017

In the beginning, there was a swamp, a hoe, and Jussi. Originally a large part of Finland was swamp, and life was hard, but as Jussi worked on the swamp, it paid off eventually. Interestingly, you can still see the same pattern. You can always find a problem (your swamp) to which you apply a tool and determination. That seems to be the core of being a Finn and even of entrepreneurship. I found my swamp in late 2014 as I commented on Christian Alfoni's blog post about webpack. Since then a lot has changed, and no doubt will change still. The years since have been a period of personal growth, and I'm happy to announce that the efforts haven't gone unnoticed. Finnish Code Ambassador of 2017# I was chosen as the Finnish Code Ambassador of 2017 by Blue Arrow Awards. It's an honorary title I carry with great pride. The win doesn't mean the work is over; maybe it has only just begun. The contest itself is quite new, and it was arranged now for the second time. Historically we Finns have been more doers than tellers, but the culture appears to be changing. The win wouldn't have been possible without you, the community. It's the interaction and constant feedback that keeps the effort alive. I want to especially thank my patient editors Jesús Rodríguez, Pedr Browne, and Artem Sapegin for help during tough times. Conclusion# There are still challenges ahead, but it's recognition like this that tells you, you might be on the right track. I hope to grow the effort further and live up to my new title of code ambassador. It could be my chance to put Finland on the map. It's a win I consider more of a community win than a personal one. During this process, we helped a major web technology to emerge, and perhaps we can go a bit further.

async-reactor - Render Async Stateless Functional Components in React - Interview with Sven Sauleau

One common way to deal with asynchronous concerns (fetching, for example) in React is to push the problem to a state manager or to handle it through lifecycle methods. Sometimes that can feel a bit much, though, and a lighter solution would be nice. async-reactor by Sven Sauleau has been designed exactly for this purpose. Can you tell a bit about yourself?# How would you describe async-reactor to someone who has never heard of it?# async-reactor gives you the possibility to render async functional components in React. It has a simple and concise syntax using async/await, as illustrated below:

async function Component() {
  await asyncA();
  await asyncB();

  return <html></html>;
}

It's useful at least in the following ways: Code splitting with the import() function (currently stage 3 of the TC39 process). Making requests using the window.fetch() function. Waiting for a DOM event (for example using p-event by @sindresorhus). Awaiting asynchronous browser APIs or your own logic that returns a Promise. You can find examples in the GitHub repository. How does async-reactor work?# async-reactor is a small library for React implementing the API below:

asyncReactor(
  component: Function, // The `async` component you want to render
  loader?: Component,  // Shown until the first component renders
  error?: Component    // Shown when an error occurred
): Component

For a better user experience, you can show a loading component while waiting for your main component to render and an error component when an error occurs. Components returned by async-reactor are just regular React components; you can use them across your app. How does async-reactor differ from other solutions?# I didn't find an alternative solution to async-reactor. @thejameskyle made an excellent package named react-loadable. It's a higher-order React component for loading components remotely. You can have equivalent behavior using async-reactor with a more concise syntax and simpler error handling:

async function Component() {
  const { default: DynamicComponent } = await import('./DynamicComponent.js');

  return (
    <div>
      <DynamicComponent />
    </div>
  );
}

You can add a regular try/catch block around the import and a loader component using the async-reactor API. Why did you develop async-reactor?# I mentioned the boilerplate code that I needed to write to handle an HTTP request. The first solution was a higher-order component, but it wasn't simple to use. Here is an example from one of my projects that I refactored to use async-reactor. Before:

import React, { Component, cloneElement } from 'react';

export class FetchIssues extends Component {
  constructor(props) {
    super(props);

    this.state = { isLoading: true };
  }

  componentWillMount() {
    const issues = import(`../issues-${process.env.LANG}.json`);

    issues.then((data) => {
      this.setState({ data, isLoading: false });
    });
  }

  render() {
    const { data, isLoading } = this.state;

    if (isLoading) {
      return cloneElement(this.props.loader);
    }

    return cloneElement(this.props.children, { data });
  }
}

After:

import { cloneElement } from 'react';
import { FetchLoader } from './FetchLoader';
import { asyncReactor } from 'async-reactor';

async function Component({ children }) {
  const data = await import(`../issues-${process.env.LANG}.json`);

  return cloneElement(children, { data });
}

export const FetchIssues = asyncReactor(Component, FetchLoader);

What next?# I'm using Preact a lot, and I would like to make an async-reactor version which would work out of the box with it. I was also asked to make a Vue.js version.
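To tie the API together, here is a small usage sketch based on the signature above; the Loader and ErrorMessage components and the /api/user endpoint are stand-ins invented for the example:

import React from 'react';
import { asyncReactor } from 'async-reactor';

const Loader = () => <p>Loading...</p>;
const ErrorMessage = () => <p>Something went wrong.</p>;

// The async component can await anything that returns a Promise.
async function UserGreeting() {
  const res = await fetch('/api/user');
  const user = await res.json();

  return <h1>Hello, {user.name}!</h1>;
}

export const User = asyncReactor(UserGreeting, Loader, ErrorMessage);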
What does the future look like for async-reactor and web development in general? Can you see any particular trends?# async-reactor relies heavily on the async/await syntax. Major browsers support it already, and with native support, transpilation and its overhead are not needed anymore. I expect the number of users will increase as the support increases. What advice would you give to programmers getting into web development?# I didn't learn development in school; I learned it by myself. The best advice I could give to a new developer is to read documentation about what they are using, to read books, and to watch conference videos. Contributing to an open source project is also an excellent way to improve your skills and learn new things. You can find issues for a beginner on contributor.ninja. Who should I interview next?# I have a few in mind: Federico Zivolo (@FezVrasta) develops a library called Popper.js; Bootstrap recently started using his library for dropdowns, tooltips, and popovers. Bradley Farias (@bradleymeck) has been pushing ES Modules for over a year. Henry Zhu (@left_pad), a maintainer of Babel; he knows a lot about how OSS works. Any last remarks?# If you want, read more about async functions and the await operator. I recommend using Babel's async-to-generator transformation if your platform is missing this syntax. If you are using babel-preset-env, you are set. Conclusion# Thanks for the interview Sven! async-reactor looks almost too easy to use. It definitely cuts down the amount of boilerplate related to performing asynchronous operations at the component level. Check out the GitHub project to learn more.

WebpackBin - Webpack Code Sandbox - Interview with Christian Alfoni

Online tools are great for prototyping ideas and even workshops. You avoid the pain of setup while giving up some degree of control. Often this is a good trade-off, though. The interview series covered CodeSandbox, an online React playground, earlier. This time around it's time to look into another alternative, WebpackBin by Christian Alfoni. If you have been following the series, you might remember an earlier interview of Christian. That time around we discussed his state management solution, cerebral. Editor's note: WebpackBin isn't online anymore. You can still read the blog post but don't expect the service to work! Can you tell a bit about yourself?#

Christian Alfoni

For sure! :) I am a 33-year-old JavaScript hacker, developer, OSS enthusiast and, due to that, an emotional wreck. Living in Trondheim, Norway. I spend my days at Ducky, a startup I joined last year. We are trying to save the planet, and my part in that is making technology choices and informally running a small development team to support the constant changes and ambitions of the company. Related to this I do a lot of open source. The cerebral project, which we have talked about before, is about to hit its 2.0 release and has been a great tool to handle the complexity and constant changes in our project. I also have other smaller projects, like marksy, I write articles from time to time on www.christianalfoni.com, and I have built a bin service that uses webpack bundling on the server side. My biggest project these days though is my six-month-old daughter, Emma. How would you describe WebpackBin to someone who has never heard of it?# You have probably heard about bin services like JSBin, Codepen, etc. These are certainly excellent services, but the way we write our code has changed a lot over the last few years. We transpile pretty much everything we write: the JavaScript code itself from a more modern syntax, JSX, templates, CSS, etc. We do not add libraries as script tags anymore either; we install them from npm. Lastly, we bundle all this together using a build tool, like webpack, using a single entry point. Webpackbin does its best to be a bin service that gives you these features directly in the browser. How does WebpackBin work?# Webpackbin is split up into different services: the Webpackbin client, webpack-sandbox, webpack-dll, and webpack-packager. Webpackbin Client# The client itself is just a JavaScript application that is served from Firebase and connects directly to it. It is built with Cerebral and Inferno. When the client loads up, it will contact the webpack-sandbox service. webpack-sandbox# The request contains information about the files, npm packages, and webpack loaders that have to be activated on the bin. The webpack-sandbox will spin up a webpack middleware on your personal session based on these details, much like the webpack-dev-middleware. What is unique about this middleware though is that it runs entirely in memory. webpack-dll# If the request contains npm packages, the webpack-sandbox service will make a request to webpack-dll. The request is specifically for a manifest.json file. If the compilation of packages has been requested before, this file is either returned by the CDN or the database of the webpack-dll service. If it is a new compilation of packages, webpack-dll will contact one of multiple webpack-packager instances. webpack-packager# The webpack-packager takes the list of packages, installs them with Yarn, uses webpack to bundle up a DLL bundle, cleans up after itself, and responds to the webpack-dll service.
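For context, a DLL bundle of this kind can be produced with webpack's built-in DllPlugin. A minimal sketch of what such a packager step could look like; the entry contents, paths, and names are chosen for illustration rather than taken from webpack-packager itself:

// webpack.config.js
const path = require('path');
const webpack = require('webpack');

module.exports = {
  // The packages a bin requested become the DLL entry.
  entry: { vendor: ['react', 'react-dom'] },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: 'dll.js',
    library: 'vendor_dll', // the global the app bundle hooks into
  },
  plugins: [
    // Emits a manifest.json describing what the DLL exposes.
    new webpack.DllPlugin({
      name: 'vendor_dll',
      path: path.resolve(__dirname, 'dll/manifest.json'),
    }),
  ],
};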
The webpack-dll service now puts the bundle into a database and also responds with the requested manifest. A DLL bundle is a JavaScript file that your app file can hook into to load modules, typically packages from npm. Now webpack-sandbox has a manifest it bundles together with your middleware instance and any loaders configured. It responds with an OK, and the webpackbin client will now refresh an iframe also pointing to webpack-sandbox, but now with a GET request. The session is picked up and points to the middleware, which extracts the bundled files and injects a script into the index.html pointing to the dll.js file related to the manifest that was requested earlier. The iframe loads the returned index.html, which grabs the dll file and your particular bin app bundle, and voilà, your bin is displayed. Design - The Hardest Part# Now, it was not like I took a shower or woke up one night and had a clear picture of this. There have been many showers and many sleepless nights thinking about and tweaking this. And what I just explained now is the easy part. The hardest part will most certainly come as a surprise, and it is two-fold. 1. Webpack DLL bundles are not as straightforward as you might think. They require the npm packages to be installed locally to resolve entry points, but there is no local install in the browser. So the way this is solved is by analyzing the npm packages and finding the relevant entry points beforehand. These entry points are then defined as externals, so your code grabs the correct module when asked. 2. npm packages are a complete mess. It feels like every single npm package out there has its own little tweak on package.json property names, directory names, file names, and what is included in the published npm package. The webpack-packager service has a lot of logic to figure out what should be included, mostly by figuring out what should not be included. Doing this is still a challenge, but it is under control. How does WebpackBin differ from other solutions?# When it comes to features, Webpackbin has a cool LIVE feature. You can just hit a button and share your URL, and others can see your code live. You can give control to your participants, and the idea is to provide a teaching tool. Another beautiful thing is that you can download your bin at any time as a zip file, ready to be npm installed and npm started. It includes the packages, loaders, and a webpack config based on that. But where it is rather unique is its ability to configure webpack loaders, which means you can use css-modules, Vue templates, Handlebars, etc.; pretty much any loader that transpiles can be used. That said, this has to be pre-configured on the server, due to the nature of running it in-memory with middleware. But if you cannot find your loader, a PR is always welcome. I am not sure about other bin services, but Webpackbin is completely open source. All the parts of the architecture are up for grabs. Why did you develop WebpackBin?# Yeah... why did I... hm. First of all, it was a huge technical challenge, and I had somewhere to start: I had an idea, and the webpack-dev-middleware project would very likely help me produce a proof of concept, which it did. I do not have grand plans with the stuff I do; I just like to follow my obsessions and intuition, and in this case, it resulted in something that could help people... which is the ultimate reward for me. And working on these kinds of projects usually opens up new doors.
Like how the webpack-dll and webpack-packager services are now being used by Codesandbox.io as well, allowing me to meet and work together with a guy who is just as enthusiastic as me about open source, the community, and creating things... Ives makes me feel incredibly old though. Can you believe he is like 20 years old and has built Codesandbox.io? It is pretty crazy. In my defense, and for anyone else reading this who is also feeling old now, the web was pretty different 13 years ago. What next?# I worked on a project a while back, using Cerebral, where I was able to record audio and interaction together in the browser, allowing you to replay code interactions, pause them, change them, continue where they left off, etc. Webpackbin as a teaching tool# I hope to find some time to implement this in Webpackbin, making it a teaching tool. You can just create a bin, hit the record button, and start talking and coding. When opened by others, it is completely transparent what is recorded and what you do as a user; I think that could be a powerful thing. npm package analysis# Other than that, Ives and I are tightening the screws on webpack-packager, creating a webpage for webpack-dll that shows the status of packagers, instructions to contribute or fire up your own set of services, etc. We also want to build a service that analyzes your npm distribution, giving you hints about things you should not have there, like tests, the source code, docs, benchmarks, etc., and also naming conventions, package.json properties, etc. Editor's note: I hope npm can take the hint and integrate some of these features into the npm client itself! We want to help to force some conventions on these things. We are also collaborating on other features that both our services can use. And who knows, maybe a path opens where we can combine forces on all parts of the stack. What does the future look like for WebpackBin and web development in general? Can you see any particular trends?# I am a bit sick and tired of listening to: "Use a type system, prevent runtime exceptions". "Use immutable data structures, prevent unwanted mutations". These are not good arguments in my opinion. When I discovered that immutability gives you a history of state changes, which could help me debug my app, and that it allows you to verify with shallow checking to optimize rendering, that is when immutability made sense. I think the same about type systems. Like, runtime exceptions are not the problem. That is not what we spend time on. But I can see that self-documentation, safe refactoring, and an enhanced IDE experience are good arguments for type systems. Managing Increasing Complexity# But my point is that these are not my primary pain points. What I think we struggle with is handling the increase in complexity in our applications. Compare what we did three years ago to what we do now, regarding animations, interaction, data, and new technologies. To handle all this stuff we need abstractions. We need abstractions for UI, animations, state management, flow control, etc. Even for the technology itself, like CSS, service workers, etc., there are tons of projects working on improving usability. More Abstractions, More Tooling# So I think we are going to see a massive increase in abstractions to help us handle all this complexity. We are not going to write more code; we are going to write less. We are going to use a lot more tools, and they are going to be visual tools. Just take a look at the significant innovations in dev tools for frameworks, Chrome dev tools, bundle analyzers, etc.
I think we have only seen the beginning of this. What advice would you give to programmers getting into web development?# You only need two things: a subscription to www.frontendmasters.com and a mentor. Some people like to say: "Learn the language". I do not agree with that. I think "Get productive and have fun" is more important. You can be insanely productive without understanding the inner workings of JavaScript, and getting into that stuff will come naturally to you as you take on more and more complex concepts. Who should I interview next?# I think you should get a hold of Amy Knight from JavaScript Jabber. As I understand it, she is relatively new to programming, and I believe it would be interesting, especially here in Europe, to listen to what resources she had available to her and how she got going with programming. Maybe we can learn something over here. We are trying to get to a place where we do not ask questions like "As a woman....", but the state of our community is that we severely lack women, and we need to understand what we are doing wrong. Like my old boss said, you need at least 30% women at the party because then the men behave. Any last remarks?# Well, I guess I should encourage you to check out Cerebral 2.0, which is closing in on release. It is a JS framework that takes handling side effects head-on and has a pretty excellent debugger. Conclusion# Thanks for the interview Christian! I remember using WebpackBin in my early workshops, and it was particularly great to see the recent upgrades made to the service. If you haven't tried it in a while, it's worth another look. If you want to check out the source, see webpackbin on GitHub. The service itself isn't online anymore.

SurviveJS Euro Tour 2017 Recap

In this post, I will go through the biggest realizations gained on my Euro tour. I know it's a cliché thing to say, but trips like this grow you as a person. Spending a month outside of your comfort zone leads to realizations you might not otherwise have. Background# It's not the easiest point of life for me, as I'm struggling to find a sustainable direction. It's a similar crisis to the one I had early on, but on a different level. It's a crisis of identity. Having a business that runs is nice, but maybe there's more to life than that. There's the never-ending battle of finding a good niche for your business while managing the personal side. These two go together, and some balance is needed. You cannot be an all-business person and forget about the personal side. I guess in my case the business reflects my personality. No way to avoid that. I saw the trip as an opportunity for a change of pace. The last few months were filled with webpack book related work, and I was fairly tired by the end of it. The trip itself came with responsibilities of its own, but at least it was something different. I ended up giving twelve sessions around Europe within the span of a month. It sounds like a lot, and it is. Interestingly enough, I got used to the life of touring by the end and will miss it. Monetarily the trip wasn't a great success, and I ended up making a solid loss, but despite this, I believe it was worth the investment. You cannot measure everything in money. It was the experience gained that was valuable for me, and it is possible I'll cover the losses through new opportunities. Half of the trip was sponsored and arranged by Reactive Conf. Thank you for your invaluable support. They also interviewed me. Presentations# I prepared three presentations for the trip: Re-imagining webpack - a 15-20 minute short talk about my love story with webpack: how I found it, what I did with it, and where it's going. Webpack - From apprentice to journeyman - the first half of the book summarized. Webpack - From journeyman to master - the second half of the book summarized. Given I had already written a book and given an earlier presentation about the topic, my challenge was to convert all the information into slides. As it happens, this was hard to achieve. What Went Right# I considered my first week in Oslo as training. In addition to giving a public talk on the topic of "Re-imagining webpack", I managed to arrange a couple of free private sessions (a one-on-one and a group session with a company). Doing this gave me a good chance to benchmark the content against real audiences and improve it. I managed to find the speaker in myself by the end of the tour. I especially enjoyed the Vienna session, and we had real interaction going on. I still made a few mistakes, but it was the most relaxed session for me and also the most enjoyable one. The irony is that once you find your pace, it all ends! I think I made the right decision in splitting the main presentation into two halves. The first one took around 45 minutes while the latter was designed for around 30 minutes. I implemented a ten-minute break in between the sections early on as otherwise it's a little much to endure. Another thing that went right was allowing questions within the presentations. I structured the parts of the presentations so that they map to the book and allowed questions after each. I think this is a good way as it gives you a lot of chances to interact with your audience and even clarify your thinking.
Sometimes the same question comes by multiple times across different audiences, and you can improve your answer! What Went Wrong# I made the mistake of including too much content in my early presentations. As presentations went by, I dropped more and more points while increasing focus. I did a special version of the "From journeyman to master" set for my last few presentations to keep it terse. It wasn't perfect, but it was enough to give some idea of the more advanced scenarios. The slides online contain more ideas than this special edition. I got bitten by my native language. Finnish and English don't go well together, and that leads to problems with comprehension. By the end of the month, I had adapted to English quite well and even struggled a bit with Finnish by then. To resolve this fully, I would have to live abroad, as otherwise I won't get enough practice. It looks like you have to choose between the two. Through no fault of my own, there were technical difficulties along the way. I learned I struggle with traditional mics a lot. You have one hand less to code with and one more thing to worry about, adding to the stress. There were also times when the mic just broke, and we had to replace it with another one a few times. I guess the lesson to learn is that if you are serious about touring, bring equipment you trust. Another lesson to learn was that you should always have backup plans when it comes to the presentation itself. On one occasion the HDMI connection just wouldn't work, and I had to use someone else's Windows laptop. Even though challenging, it worked out somehow in the end, even though I could not live code. It's good to make sure your presentation material is available online, as you never know when you need a backup plan. Sometimes issues beyond your control like these will affect you. Technical problems lead to rushing, and rushing leads to poorer results. Maybe the lesson here is to keep your calm, as there's not much you can do about technology above a certain point. Rushing will only make it worse. For this reason, it's a good idea to cut the amount of content you want to present, as then there's slack in the schedule. I was interested in having more free sessions in Berlin, but I probably didn't push that enough, as there was no response to my initial tweet. But on the plus side, I had time to explore the city and its possibilities. Gnome Trip# The trip itself was amazing. I spent time in a lot of cities and learned about different cultures. Being outside of your own culture allows you to understand it better. I understood I've been taking things like clean air, water, sauna, a solid internet connection, and honesty for granted. It's not the same everywhere, and there are always trade-offs. I also understood how expensive a place Finland is. Only Norway was worse in this regard, and the other countries were cheaper, sometimes greatly so. A good example is a pasta dinner in Berlin which included an appetizer, main course, dessert, and a drink for 7.5 euros. A comparable offering would cost between 20 and 30 euros in Finland while not matching the quality.

Oslo

What Went Right# When you go abroad, you can forget certain axioms about yourself. Instead of being shy and reserved, as most Finns tend to be, I decided to be open to experiences. Sometimes this meant chatting a bit with another traveler or going to meetups. Doing both opened an entire world to me, and I gained experiences I would have missed otherwise.
How else could you get a private underground tour in Berlin or get to see the quarries of the Czech Republic and end the day with a stunning dinner? It seems to me that if you are open to experiences and willing to get out of your comfort zone a little, a lot of magical things can happen. That realization was perhaps the most mind-altering of the trip for me. It's easier to build connections once you find a common interest, be it touring, hiking, or something else. It's amazing how deep you can go in discussions when you find the right person. Maybe it's a form of therapy, as you get to discuss topics you would otherwise skip in fear of losing face. I'm not the type of person that connects well, so it was refreshing to see that I can do it given the right environment and courage. I wasn't much of a subway person before this trip, given my home town doesn't have one. This trip showed me how convenient it can be. I was particularly impressed by the systems of Oslo and Prague. Berlin worked too, but it felt complicated in comparison. The trick is to buy a day ticket or longer, as this will save time and make your stay less stressful since you don't have to buy tickets all the time. It also gives you a degree of freedom. Google Maps routing worked well in most cities. If you go to Vienna, you should install qando Wien as it works much better than Google's offering. To discover trekking routes, you should use mapy while in the Czech Republic. Investing in good shoes before the trip ended up being one of the better decisions I've made. I bought an expensive pair of Mephistos by chance just to see if expensive shoes make a difference. Apparently, they do. I didn't have any significant problems with my legs or feet despite walking roughly 200 km during the month. As a result, I'm in better shape now and feel like running again, as walking doesn't do the trick anymore. I also shed some winter weight while at it. A big win overall. It was great to see nature in the Czech Republic and Austria. I can recommend groups like Discovering Prague or Internationals in Wien if you want to explore and meet new people. I'll miss their events for sure. What Went Wrong# I ran up quite a phone bill in Norway by initially forgetting to disable data on my phone, as modern phones are data-hungry. On the plus side, I figured out the right way to deal with this problem while in Germany. I bought a prepaid SIM for 15 euros and used it solely for data. Even with a 1 GB cap, it was enough for my purposes, and I bought another one in Austria. If you travel in multiple countries, it is important to buy the right SIM. I think the cheaper one was constrained to Germany alone. The EU legislation will change and make this easier. I should be able to use my unlimited LTE connection without any extra costs starting from June. I managed to lose my hat on a plane. I simply forgot to pick it up when leaving. Fortunately, this wasn't a great loss, and it gave me a good excuse to buy a nicer one. In short, anything that's not stored properly will likely get lost. I also had a little pocket issue with certain pants. Wear only pants with good pockets, as otherwise you are asking for trouble. I should have spent more time researching restaurants and places to visit beforehand. This time I relied on tips from locals and intuition, but doing the work beforehand wouldn't be a bad idea. Sometimes you find the best places by exploring on your own, though. It would have been a good idea to bring earplugs and a sleep mask to help with sleep quality.
I did get sleep, but I could have done better in this department.

Berlin

Conclusion#

Even though it wasn't perfect, I think the trip was a personal success. I have a better idea of how to focus my energy and what to do next. Wheels are turning, and you may see me in other parts of Europe again sometime soon, but I'll announce the news separately with more details.

Fluture - Fantasy Land compliant alternative to Promises - Interview with Aldwin Vlasblom

Dealing with asynchronous code has always been a challenge in JavaScript. Callbacks are the classic way, and since then we've gained higher-level abstractions and constructs for handling the problem. This time around we'll discuss Fluture, a Fantasy Land compatible alternative to Promises, by Aldwin Vlasblom.

Can you tell a bit about yourself?#

That's where I was introduced to PHP and realized I wanted to pursue an education in this area, which led me to do a course in interactive media design and development. During my second internship, I became responsible for maintaining - and was later hired to maintain - the company's internal PHP framework.

I loved making APIs and abstractions for other developers and was fond of higher-order functions. It's therefore no surprise that I was drawn into the JavaScript functional programming world and ended up creating an API based almost exclusively on higher-order functions.

How would you describe Fluture to someone who has never heard of it?#

There are three approaches to introducing Fluture, depending on the background:

- To an inexperienced JavaScript programmer I might say that it's an abstraction which serves to reduce the complexity of dealing with JavaScript's asynchronous nature (also known as callback hell).
- To an experienced JavaScript programmer I might say that it's like a Promise, but lazily evaluated, with cancellation, and with a more principled API which conforms to Fantasy Land.
- To a functional programmer I might say that it's a Monad which encodes side-effects, delay, and possible failure.

How does Fluture work?#

In its simplest form, it's a function which takes a function and returns it wrapped in an object under the name "fork", perhaps best explained through code:

```javascript
const Future = fork => ({fork});

Future((rej, res) => res(1)).fork(
  console.error,
  console.log
);
// 1
```

This structure becomes interesting once you add higher-order functions like map:

```javascript
const Future = fork => ({
  fork,
  map: f => Future(
    (reject, resolve) => {
      fork(
        reject,
        x => resolve(f(x))
      );
    }
  )
});

Future((reject, resolve) => resolve(1))
  .map(x => x + 1)
  .fork(console.error, console.log);
// 2
```

The idea that you can map over a Future similarly to how one might map over an Array comes from the research into "algebraic data types" brought to JavaScript most prominently by Fantasy Land. Fluture builds on top of these ideas to add:

- Full conformity to Fantasy Land Monad
- Other utilities and transformations
- Cancellation
- Input type checking
- And soon: stack safety

How does Fluture differ from other solutions?#

The description of Fluture states that it's an alternative to Promises, so it's only natural that people want to compare the two. In my article comparing Futures to Promises I write:

On the surface Futures are just like Promises, but with the different behaviors of the .then method extracted into three distinct functions, each with a single responsibility.

The then method of a Promise is massively overloaded: you can give it zero to two arguments, both of mixed type (Nil or Function). The return values of the callbacks are also overloaded: you can return any value, but returning something with a then method has a particular meaning, and throwing an error also has special meaning. Extracting all of these behaviors into separate functions makes it easier to abstract over, clarifies developer intent, and makes it simpler to detect mistakes.
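To illustrate that separation, here is a hedged sketch building on the toy Future above - not the real Fluture library's API - where map transforms a value, chain sequences another Future, and fork actually runs the computation: three jobs a Promise's then performs at once.

```javascript
// A sketch extending the toy Future from the interview; not Fluture itself.
const Future = fork => ({
  fork,
  // map: transform the eventual value, nothing more.
  map: f => Future((reject, resolve) =>
    fork(reject, x => resolve(f(x)))),
  // chain: sequence a computation that itself returns a Future.
  chain: f => Future((reject, resolve) =>
    fork(reject, x => f(x).fork(reject, resolve)))
});

// Nothing executes yet - unlike a Promise, a Future is lazy.
const program = Future((reject, resolve) => resolve(2))
  .map(x => x * 10)
  .chain(x => Future((reject, resolve) => resolve(x + 1)));

// Only forking actually runs the computation.
program.fork(console.error, console.log); // 21
```

Because nothing happens until fork is called, the composition stays lazy, which is part of what makes the lazily-evaluated design mentioned earlier possible.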
I've also written about the differences between Fluture and other libraries which have a similar structure in my article comparing them. I'll go into this when answering why I developed Fluture.

Why did you develop Fluture?#

I got into functional programming some years ago when I discovered Ramda, and from there I became acquainted with Fantasy Land. The first algebraic type I decided to try was a Future. I tried data.task and Ramda Fantasy.

A little later I was teaching asynchronous functional programming to a small group of developers, and I found that one of the biggest sources of confusion was the bizarre and cryptic error messages one would get out of these Future libraries after making simple mistakes. I had also accumulated a set of common utilities that I was using with Futures, so I decided to create a Future library which would ship with these utilities and provide understandable error messages. I made sure that the performance remained decent because I did not want people to have to choose between good performance and a pleasant development experience.

I later discovered Sanctuary, with which Fluture shares a lot of its design philosophy. It became another important part of Fluture's design to integrate nicely with Sanctuary. To learn more about Sanctuary, read the interview with David Chambers.

What next?#

A critique Fluture and other Futures have always received is that they are not stack safe, unlike Promises. Promises execute every action in the next JavaScript tick, which gives them inherent stack safety because every operation waits until the stack has cleared before executing.

Some weeks ago, by combining ideas from Promises, Fluture, and Free Monads, I created a stack-safe proof-of-concept Future which does not use the next-tick trick. I'm currently working on porting the entire Fluture library to this new architecture. It's already feature complete - it just needs some polishing before being released as version 6.0 in the coming months.

What does the future look like for Fluture and web development in general? Can you see any particular trends?#

Fluture has earned a place in my personal toolkit when it comes to classic request-response applications, like those you find in web servers. In this context, I consider it the best solution to the async problem (and the promise problem) that I've used to date, and I don't think that will change soon.

As for other kinds of applications, like the ones you might find running in browsers, I think we are moving towards reactive programming. Streams are the perfect async abstraction in these environments. Streams are like Futures, except that they can produce more than one value. For an excellent Stream library I recommend most. And for an interesting way to use it, and to think about front-end applications, I would recommend learning about CycleJS. To learn more about CycleJS, read the interview with André Staltz.

What advice would you give to programmers getting into web development?#

Avoid using object and variable mutation as a feature for the functionality of your code - you are shooting yourself in the foot. Mutation is a means to optimize code.

Who should I interview next?#

I admire the works of Brian Cavalier, author of most, creed, and more. I also think it may be good to interview Irakli Safareli; he has been an invaluable contributor to both Sanctuary and Fluture, and he's been exploring the little-explored field of Free Monads with his project Free.
Lastly, I would like to give a shout-out to Roman Pominov, who helped me bring cancellation into Fluture. He authored Kefir - the first reactive Stream library I got into - and Static Land, an adaptation of Fantasy Land which pushes the community forward.

Any last remarks?#

I think a wildly under-appreciated feature of Monads is Monad Transformers. I've scratched the surface of what they are capable of in my project momi, which implements the core ideas of Express in only a few lines of code by combining two existing Monads. I would like to see their usage grow.

Conclusion#

Thanks for the interview, Aldwin! It is always amazing to see new solutions to old problems. Sometimes reframing a problem can lead to interesting alternatives. Check out the Fluture GitHub page to learn more about the project.

CodeSandbox - Online React Playground - Interview with Ives van Hoorne

Getting started with React can be daunting, especially if you want to understand the entire setup. Solutions like create-react-app have hidden a lot of this complexity. But there's more to it. CodeSandbox by Ives van Hoorne pushes the problem online. Instead of setting up a React project each time you want to experiment, you can use his service. Read on to learn more.

Can you tell a bit about yourself?#

I'm a student at the University of Twente and a part-time developer at Catawiki. I worked there full-time last year; at that time I was responsible for converting the website to React. Though I like all kinds of programming, I've been especially attracted to the frontend these last few years, mostly because it's also a bit artistic. I get a lot of satisfaction from building user interfaces that people find both beautiful and easy to use.

How would you describe CodeSandbox to someone who has never heard of it?#

CodeSandbox is an online editor for web development projects. It automates things like transpiling, bundling, and dependency management for you, so you can easily create a new project with a single click. The editor also has a live preview, so you can see the results of your work while you type. Sharing is very easy; you can just share the URL of your project or embed it in an iframe. Others can then fork it to edit the project to their liking. CodeSandbox currently focuses on React projects, which means that it supports features like downloading to a create-react-app template. This is an example of a project on CodeSandbox; it's the classic TodoMVC example in Redux:

How does CodeSandbox work?#

CodeSandbox at its core consists of two parts: the editor and the preview. The editor is the whole CodeSandbox application (file manager, code editor, dependency settings), and the preview is the result you see on the right. These two parts are strongly decoupled and only communicate using postMessage. The preview is on a subdomain (sandbox.codesandbox.io) in an iframe to literally 'sandbox' the preview away from the main application.

Editor vs. preview

The editor sends all its files, directories, and dependencies to the preview; this happens either when the user changes something or when the application loads. The preview then takes all these files and processes each one with a loader based on its type, which currently is either CSS, JavaScript, JSON, or HTML. These loaders can be very simple; the JSON loader, for example, is only a one-liner:

```javascript
export default module => JSON.parse(module.code);
```

The JavaScript loader is by far the most interesting, since it also has to transpile, require, and cache the result. It first transpiles the code using Babel; then it evals the transpiled code with a stubbed require function. This require function takes a path, checks whether it refers to an npm dependency or a file, and handles it again with the loader for that extension. Every result is cached, so most of the time only the edited file is evalled again after a change.
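To make that pipeline concrete, here is a minimal sketch of the idea - not CodeSandbox's actual implementation; the files object, the transpile stand-in, and the cache shape are assumptions for illustration:

```javascript
// Hypothetical in-memory file store; in CodeSandbox these come from the editor.
const files = {
  'data.json': { code: '{"answer": 42}' },
  'index.js': { code: 'exports.answer = require("data.json").answer;' },
};

// Stand-in for the Babel transpilation step described above.
const transpile = code => code;

// One loader per file type; each receives the raw module and returns its value.
const loaders = {
  json: module => JSON.parse(module.code),
  js: module => {
    // Evaluate the transpiled code with a stubbed `require` in scope,
    // so imports re-enter this same pipeline.
    const exports = {};
    new Function('require', 'exports', transpile(module.code))(
      requireModule,
      exports
    );
    return exports;
  },
};

// Every result is cached, so after an edit only the changed file
// needs to be evaluated again.
const cache = new Map();

function requireModule(path) {
  if (cache.has(path)) return cache.get(path);
  const extension = path.split('.').pop();
  const result = loaders[extension](files[path]);
  cache.set(path, result);
  return result;
}

console.log(requireModule('index.js').answer); // 42
```

The real loader dispatches on more types and resolves npm dependencies through the bundler described below, but the dispatch-evaluate-cache cycle is the essential shape.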
The output of a loader then goes through a boilerplate, and the boilerplate is determined by the output. A boilerplate is simply a separate file that does something with the loader output; for example, the boilerplate for a returned React component is:

```javascript
import React from 'react';
import { render } from 'react-dom';

// domChanged is a boolean which specifies if the module
// has done something to the DOM while it was evaluated
export default function(evaluatedModule, domChanged) {
  if (!domChanged) {
    const node = document.createElement('div');
    document.body.appendChild(node);

    render(
      React.createElement(evaluatedModule.default),
      node
    );
  }
}
```

This boilerplate renders the exported React component to the DOM if evaluating the module didn't change the DOM at all. I want to make it possible for others to build and share loaders/boilerplates as well, but this requires some thinking because we still want to support create-react-app interoperability.

The npm dependencies are handled by a separate server I call the 'bundler'. The editor sends the list of dependencies to it; the bundler then creates a UMD build of this combination using webpack 2 and sends back an object containing the URL and the manifest. A manifest is an object mapping dependency names to module numbers, so the JavaScript loader knows which module to load from the UMD build when a dependency is required.

How does CodeSandbox differ from other solutions?#

CodeSandbox is one of the few editors that supports npm dependencies and multiple files/directories. It also handles almost everything in the browser, which allows us to show real-time feedback without any server communication. That is a feature-wise difference, but I think the real difference compared to other editors is the goal. We want to make it possible for others to import your sandbox as a dependency. This way you can not only edit others' work, you can use it in your own projects. The feature hasn't been fully implemented yet; I still need to finish the UI for it.

Why did you develop CodeSandbox?#

I started to think about CodeSandbox last summer when I was on holiday in St. Ives. Several colleagues asked me questions about the React project we had been working on, but there was no easy way for me to answer them. The questions were either related to a library or so complex that they were very hard to show in, for example, CodePen. That's when I started thinking: 'man, it would be great just to have an online editor that could do this'. I began working on it in my spare time, and eventually my friend Bas Buursma joined me.

What next?#

I'm currently working on more support for users and sharing. Specifically, I'm building the profile view right now; there you can showcase your sandboxes and see statistics like how many times your sandboxes were forked and how many views you got. It also includes a much requested feature: deleting sandboxes. Deleting is currently impossible, which is irritating; I have 38 sandboxes right now, and I would love to delete the junk ones. This is the current design for the new profile view:

Profile view design

After we have better support for searching and sharing sandboxes within CodeSandbox, I can work on 'import as library'. I'm excited about that feature and would love to build it sooner; it's just that I first need to build the foundation for it. I'm also exploring ways to use your own editor for building things on CodeSandbox; I'm looking at things like setting CodeSandbox as a git remote, GitHub integration, or making it possible to sync local files. Syncing is still very vague and unexplored, though, so nothing is certain on this front.
What does the future look like for CodeSandbox and web development in general? Can you see any particular trends?#

In this dynamic world, it's very hard to speculate on what direction we're going. I think that React and other web application frameworks, like Vue, will gain a lot more mainstream adoption this year. I've seen a big increase in interest in Vue, and many companies are moving to React lately. That interest automatically means more people will come to CodeSandbox, either to learn, to try something out, or to build an example for asking a question. My big question is whether people will use CodeSandbox to build projects on, either to share or to start something serious. To summarize: I think web frameworks, and as a result CodeSandbox, will grow a lot; the big question for CodeSandbox is which direction that growth takes.

What advice would you give to programmers getting into web development?#

Try not to get overwhelmed! That's easier said than done, so if you do get overwhelmed by a task, it's smart to divide it into smaller, more manageable sub-tasks. Take it step by step. I also recommend learning by just starting a personal project. Building something that you like and can share is a great motivation, and that motivation helps to overcome many hurdles along the way.

Who should I interview next?#

Christian Alfoni, the creator of WebpackBin (now defunct) and Cerebral. It has been a blast working with him. He is close to releasing a new version of Cerebral, a state controller. Editor's note: I interviewed Christian earlier about Cerebral.

Any last remarks?#

I learned React via SurviveJS! It's a great book and helped me a lot with understanding React.

Conclusion#

Thanks for the interview, Ives! It's nice to see services like this appear, as they take so much pain out of the process and enable quick experimentation. Maybe one day web development will move to the web entirely. Be sure to give CodeSandbox a go.

Scrimba - Interactive Screencasts Created in an Instant - Interview with Per Harald Borgen

YouTube and the web are filled with screencasts. They provide a great way to learn difficult concepts, as you can see in practice how something specific is done. This is the way I learned to use Blender, a 3D suite, in the past. It all made sense after I saw how to use the application. When it comes to coding, the challenge is that you literally have to type it all out yourself if you want to replicate the results. Scrimba has developed a solution that could change all this. Per Harald Borgen can tell us more about it.

Can you tell a bit about yourself?#

I'm one of the co-founders of Scrimba. We're an Oslo-based startup consisting of three co-founders: Sindre, Magnus, and me. Our goal is to make online learning better than in-person learning, starting with programming. At the core of this is Scrimba - an interactive video format for explaining and understanding code. I myself became a professional developer in 2015, a process I've written extensively about on my blog.

How would you describe Scrimba to someone who has never heard of it?#

Scrimba is an interactive video format for communicating code. It makes the experience significantly better for both the creator and the viewer. The easiest way to understand Scrimba is to watch the one-minute screencast below:

As a viewer, you can pause and edit the code at any given time. So if you're struggling to understand something, just hit pause, jump into the environment, and play around with the code until you understand it properly. Scrimba also makes the creation experience much less frustrating, as we remove all the hassle involved with creating coding screencasts. No more setup, edit, encode, upload, and re-encode. Just code while you talk and then publish instantly.

How does Scrimba work?#

We record the underlying events instead of pixels. When replaying a Scrimba screencast, we recreate exactly what the creator did (a rough sketch of this idea follows below). This opens up a whole new world of possibilities for interactivity, creation, responsiveness, search, and recommendations. We've only begun scratching the surface of what we can do with this format.

How does Scrimba differ from other solutions?#

Compared to traditional video, Scrimba has the following benefits:

- Much easier to create
- Interactive (the viewer can pause and edit code)
- 1% of the file size of video
- Better mobile experience (because of responsiveness)
- Indexable/searchable

Why did you develop Scrimba?#

Scrimba was invented because Sindre needed to document his programming language Imba. He first tried creating traditional video tutorials but became increasingly frustrated with the creation process. What would have taken him two minutes to explain in person often took him an hour to convey through video. So he began building Scrimba as a tool for people to learn Imba. However, it soon became clear that it could be used for much more than just Imba. Just think about it: if you want to explain code online today, you're stuck with either text or video, both of which are cumbersome compared to explaining in person. What if you could combine the ease and quality of in-person teaching with the global scale of the web? That's what we want to do with Scrimba!

What next?#

We're going to continue to lower the threshold for people to create content, so expect it to become even easier to create Scrimba screencasts. We're also working on building a community around Scrimba.com.
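To make the event-based format described above concrete, here is a hedged sketch of recording and replaying editor events - not Scrimba's actual implementation; the event shape and the applyToEditor callback are assumptions for illustration:

```javascript
// While recording, store timestamped editor events instead of pixels.
const recording = [];
const startedAt = Date.now();

function record(type, payload) {
  recording.push({ time: Date.now() - startedAt, type, payload });
}

// Hypothetical events a code editor might emit during a screencast.
record('insert', { position: 0, text: 'console.log("Hello")' });
record('select', { start: 12, end: 19 });

// Replay re-applies each event at its original offset, which is why the
// result stays editable and searchable and is tiny compared to video.
function replay(events, applyToEditor) {
  events.forEach(event => {
    setTimeout(() => applyToEditor(event), event.time);
  });
}

replay(recording, event => console.log(event.type, event.payload));
```

Since replayed events land in a real editor buffer rather than a video frame, pausing and editing at any point falls out naturally.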
What does the future look like for Scrimba and web development in general? Can you see any particular trends?#

I think the number of developers (not just web developers) in the world will continue to grow, as software is still eating the world. Also, the skill of coding will become more mainstream, as more and more kids are exposed to it at school. At Scrimba, we want to be a part of this by empowering anyone to easily teach code to others. We aim to become the best place online to teach and learn technical subjects.

What advice would you give to programmers getting into web development?#

Getting into web development - and JavaScript in particular - can seem intimidating, given all the hot new tools/frameworks/libraries you seemingly need to learn. I'd say don't worry too much about that in the beginning, and rather focus on the essentials. Once you know the essentials, you can learn any tool you want.

Who should I interview next?#

Keith Horwood of Stdlib. He's basically creating the standard library for the internet, which is really awesome. It's the easiest way to create, distribute, and discover web services.

Any last remarks?#

Thanks for interviewing us, and keep up the great work!

Conclusion#

Thanks for the interview, Per Harald! Scrimba looks cool to me, and there's a fair number of screencasts on the Scrimba site already!

SurviveJS - Webpack - v2.0 - Results and Errata

Enough time has passed since the major release of the webpack book, so it's a good time to evaluate how well it went. As no release is perfect, I've been pushing smaller patches to the content, and I cover the fixes later in this post. The fixes are minor, but they were still worth doing.

Results#

The paperback has sold roughly twenty copies in two weeks. The Kindle edition is close to thirty, and the higher-quality, more expensive hardcover edition has close to ten interested buyers, which might just be enough for me to do a small print run, but we'll see. In addition, the Leanpub edition reached two thousand readers, but it's good to remember that half of those received the book for free when I split up my first book. Based on these results, it's safe to say the release wasn't a great financial success.

I'm particularly happy that I found a good writing model, though. The last few months especially were great, as significant progress was made, and this bodes well for the future. The book is stronger in many ways than the initial "Webpack and React" one, and I have a solid writing process in place now. I know what kind of books to write from now on.

There's still a lot of work to do, some related to the webpack book. Even with poor sales it's worth doing, as it adds more streams to the whole. As they say, a lot of small streams form a big river.

Errata#

I've listed the errata per version below so you can see the main changes. You can see them all through GitHub's compare view.

2.0.2#

- Renamed the Splitting Bundles chapter as Bundle Splitting to be consistent.
- Linked to JSPM in the comparison appendix.
- Added missing const webpack = require('webpack'); to the Bundle Splitting chapter example.
- Improved wording related to disabling package consumption related warnings.
- Noted that generating a single bundle over many is more performant, as discussed in karma-webpack issue 23.
- Made Istanbul exclude test files from coverage results.

2.0.3#

- Added missing file names to the i18n chapter.
- Clarified the i18n chapter's ESLint configuration.

2.0.4#

- Changed the entry name from demo to lib in the Bundling Libraries chapter.

2.0.5#

- Simplified imports in the Extending with Loaders chapter to avoid a linting warning.
- Noted that PhantomJS doesn't support ES6 features yet, so it requires preprocessing in order to work.
- Added a missing path import to the Karma configuration example.

2.0.6#

- Mentioned that HMR setup needs to be done before implementing Hot Module Replacement with React in the appendix.
- Cleaned up the webpack process image a bit so you can see an entry can be something complex.

2.0.7#

- Simplified the Searching with React appendix example so that it works with HMR out of the box. The old example used a binding method that doesn't work well with react-hot-loader.
- Fixed a typo in the Minifying chapter - that → than.
- Made sure recordsPath receives an absolute path, as webpack enforces this now.
- Mentioned that PurifyCSS doesn't work with CSS Modules yet.

2.0.8#

- Fixed a typo in the Glossary - an a → a.

2.0.9#

- Recommended cheap-module-source-map over cheap-module-eval-source-map since it's a more stable option.
- Included naming in the import() syntax, given that it's supported now.
- Dropped the devServer.open: true bit as redundant.
- Noted that CSS Modules work with PurifyCSS if you use a whitelist.

2.0.10#

- Pushed CommonsChunkPlugin to production configuration to speed up the build.
- Dropped the useSourceMap flag from parts.minifyJavaScript.
- Added missing const webpack = require('webpack'); to the HashedModuleIdsPlugin example.
- Made the CSSNext example use require('postcss-cssnext')().
- Made the autoprefixer example compatible with the newest version. It needs require('autoprefixer')() over require('autoprefixer') now.
- Simplified the stylelint configuration by dropping ignoreFiles, since webpack's include is used already.

2.0.11#

- Added a missing .babelrc to a filename in the Loading JavaScript chapter.
- Added purify-css as a dependency to install, as purifycss-webpack now requires it as a peer dependency.

2.0.12#

- Linked to Predictable long term caching with Webpack.
- Toned down the records statement.

2.0.13#

- Fixed the book to use cheap-module-eval-source-map over cheap-module-source-map for development.

2.0.14#

- Changed browserslist to .browserslistrc as the project has changed the file name.

2.0.15#

- Added missing const path = require('path'); to the Extending Loaders chapter.
- Added missing formatting to const { RawSource } = require('webpack-sources'); in the Extending Loaders chapter.

2.0.16#

- Changed PATHS.app to PATHS.lib in the Bundling Libraries chapter.

2.0.17#

- Simplified a devtool related paragraph to pass Amazon's margin check.

2.0.18#

- Getting Started - Use import over require in the text.
- Composing Configuration - Fixed sentence structure in "Composition-based approach...".

Conclusion#

There's still a lot of touring to be done, so things will move slower than usual. That said, I'll try to get the most out of this experience and convert it into something good. Traveling is good for ideas.