Archive for the ‘Work’ Category

CSS inliners

Tuesday, August 13th, 2019

At the moment, I’m working with a company, handling their email templates. One class of tools I’ve found to be very useful is CSS inliners. I won’t be discussing Inky templates in this post.

I’ve been using Campaign Monitor’s web-based inliner, and for the most part, it works perfectly. The only downside is that I have to visit the URL, paste into the text area, click a button, wait for the page to reload, copy, and then paste the inlined HTML. It was okay at first, but by the 100th paste, it got stale quick. That’s when I started looking into CLI options.

The tech explosion has been great, providing a lot of inlining CLI tools I wouldn’t have had otherwise. Everyone wants to make a name for themselves on the interwebs, and I can relate.

That being said, I’ve gone through pretty much all the NodeJS ones, and the only one I found comparable to Campaign Monitor’s is “inline-email”.

Example command:

inline-email index.html --noInlineImages | pbcopy

The other solution I found acceptable is the Python-based premailer. It’s pretty much just like inline-email, except it gives you the option to not strip the style tags. I found that useful on some occasions when the element you wanted to affect didn’t exist until rendering (sometimes clients inject things on their own). Premailer is available on PyPI.

Example command:

python -m premailer -f index.html | pbcopy

If anyone is having issues getting their emails to look correct across email clients, whether phone, web, or desktop based, you should give the aforementioned tools a try; they definitely made my life a lot easier.

Perfection Paralysis

Tuesday, February 5th, 2019

I’ve spent more than a decade in professional software development. I’ve seen and fought this demon time and time again. The demon’s name is Perfection Paralysis.

It’s quite the beast. When you have so many eyes reviewing and critiquing your code, you want to be the best you can possibly be. You want your code to be representative of all that you’re capable of, because that’s what you’re judged by.

I think this is why I find writing my own hobby code so relaxing. No code reviews, no tech designs, no architecture reviews, no planning meetings, and so on. I simply let the thoughts flow from my mind to my fingers; code happens, the world is changed.

I warn of the dangers of perfection paralysis because there is no single solution to a problem; there are, in fact, probably infinite ways to solve it. No matter which way you go, there is always room for criticism and improvement. You can make it faster, you can make it neater, you can make it more extensible, you can make it more testable, and the list goes on.

I think one factor that a lot of developers don’t take into account, is how much more work they’re subjecting themselves and possibly their organization to.

Your code looks great, but did you write DocBlock comments? Your code works, but is it type strict? Your tests pass, but do they handle mutation testing well? You have test coverage, but is it 100% coverage, or partial coverage? Your tests pass 100%, but do they pass for all devices, or just a single device? Your application works now, but will it work once the load increases a hundredfold? So on… and so on…

I find myself in this situation ALL the time, both in professional and non-professional environments. Great, this JS code works, but should I have done it in React? Great, it works in React, but should I have incorporated Flux/Redux? I finished it in native PHP, but should I have used the Laravel framework? I did it in Laravel, but maybe I should have done it in GoLang? Great, I’ve coded it in GoLang, but maybe I should have coded it with Gorilla? The data persistence layer is in MySQL, but maybe I should have used DynamoDB? The list goes on and on and on; it will NEVER stop, because there’s simply no stopping case. This is simply the nature of our work: there will never be a best, because the best is yet to come.

I think remembering the BIG picture helps me maintain my sanity. Reminding myself of the objective allows me to focus on running past the finish line instead of being crippled by anxiety at the starting line. In the end, code is code. It’s simply a series of instructions to a computer, a persistence mechanism, and a presentation mechanism. No matter how we’ve changed things, those things stay the same. In the end, no matter how you code it, if it fulfills the objective, it’s working code.

I think a concept that a lot of developers forget in the software development life cycle is that code becomes obsolete. In this RAPIDLY changing world, should we optimize for 10 years? 5 years? 3? 2? 1? 6 months? That’s a tough question, but let me answer it with another one. If you wrote optimal code using ES5, is it still optimal under ES6? If it was optimal ES6 code, is it still optimal ES7 code? If you coded it using React, could Material UI have done a better job? If you used jQuery, and now you have all these selectors built into JavaScript, is it still the optimal approach?

I urge myself, and other developers, to optimize for the situation as it stands, in their own life, in their environment, and to future-proof it to a reasonable extent. Architect for the current and PLANNED next steps, but I wouldn’t optimize any further than that.

Advance the code and project in an iterative, self-rewarding manner, rather than as a giant hunk of perfection that is either ALL or NOTHING, a long way into the future from now. When you only have 1 or 0, the expected value is 0.5; but if you allow yourself to build everything in an incremental manner, it’ll be much less stressful, and you’ll have more fun.

So for myself, or anyone else reading this… “It’s okay, it’ll be fine, even if there might be a better way, just do it the best way for the current circumstances, when the circumstances change, much like the Monty Hall paradox, we can pivot then”.

Useful command to test the speed of a container, VM, or system

Monday, March 6th, 2017

I’ll be breaking down the following command part by part:

time dd if=/dev/zero of=test.dat bs=1024 count=100000


What does time do? It runs a process and then captures how long it took to execute.

What about dd? Well, it’s a command that copies data from standard input to standard output.

What about the params if, of, bs, and count?

“if”: It’s decently obvious, but “if” specifies the input; in this case we’re taking input from /dev/zero, a special file that provides as many null characters as are read from it; an infinite file of sorts.

“of”: It’s the output file.

“bs”: the block size, in bytes

“count”: the number of blocks to copy

So all together, the command writes 100,000 blocks of 1,024 binary zero bytes into the file “test.dat”. In other words, it generates a roughly 100 MB file, letting you test the I/O performance of a system. As we move towards a world where we’re optimizing the crap out of everything, this is a very useful command to know.
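If dd isn’t handy, the same test can be approximated in Python (a rough sketch; os.fsync is used so the timing includes the flush to disk rather than just the page cache):

```python
import os
import time

block = b"\x00" * 1024   # bs=1024: one block of binary zeroes
count = 100_000          # count=100000

start = time.time()
with open("test.dat", "wb") as f:
    for _ in range(count):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())  # force the data to disk, like dd's conv=fsync
elapsed = time.time() - start

total_mb = len(block) * count / 1e6   # 102.4 MB, i.e. roughly 100 MB
print(f"wrote {total_mb:.1f} MB in {elapsed:.2f}s ({total_mb / elapsed:.1f} MB/s)")
os.remove("test.dat")
```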

Amazon S3 Outage

Tuesday, February 28th, 2017

Today’s post is regarding the Amazon S3 outage.

These types of occurrences are becoming more and more common. Tons of companies have placed a ton of faith in the Amazon ecosystem, and time and time again, it looks like Amazon has let them down. When these things break, they break at a MASSIVE scale (“AWS outage knocks Amazon, Netflix, Tinder and IMDb in MEGA data collapse”).

There were other outages in 2012, 2013, and probably more unlisted. I think it’s an interesting challenge that Amazon is tackling, and I feel like more and more of the web is putting all of its eggs into one giant basket.

I wonder: if we were to build a truly scalable system that is unlikely to be impacted, maybe it would make sense to diversify the system’s infrastructure across multiple services. Maybe some redundancy at the DNS layer, then some more at the load balancer, some more in how things are replicated, localized, and so on… Just something to reflect on due to today’s outage: “How can I prevent my organization from being impacted by this?”

Hybris vs Magento

Friday, September 26th, 2014

“We’re on Magento, but we need to upgrade to Hybris!”

“Nothing is true, everything is permitted”

I took a look at two companies and ran a benchmark on both. Which of the following do you think is the “better” version?

bloom Oakley


The slower-loading one is actually Hybris; the faster one is Magento. People are often quick to dismiss languages, technologies, and software. I say nay! Try to figure things out first before you throw in all those “extra screws”. It’s important to do a cost-benefit analysis on MANY fronts.

Don’t buy into hype. Too much of this world is built upon inefficiencies. Do understand that oftentimes interests conflict; what is in your best interest isn’t in theirs.

Hybris is built in Java. Java has many pros, but one of the cons is that developers are hard to find, and it’s not exactly the fastest to code in either. Magento is built on PHP, which has many cons, but one of the pros is that PHP developers are plentiful, and projects can be built quickly and, often, very cost-effectively.

Just understand that the more complex and inaccessible your environment, the harder it is to scale. You’ll run into many forms of scaling issues, whether code, load, or human capital. Switch to a solution only after carefully assessing its pros and cons; this choice MUST be made extremely carefully because its impact is extremely far-reaching. Also understand that simply because certain things are “best practices” doesn’t necessarily mean they’re the best practice for your company and situation.

Only one way to do things? The cat would disagree

Friday, September 26th, 2014

I had a discussion with an industry peer today regarding databases. He arrived at two conclusions, both right but also wrong: one, “strings have no business being in a SQL statement”; two, “IDs have no basis being in a mapping table”. From a pure data storage and efficiency perspective, he’s correct, but from a practical, real-world perspective, he’s wrong.

Strings have no business being in a SQL statement

The point of readability is to provide the ability to deduce, at a glance, as much information as reasonably possible. So let’s say we have the following database table structure (reconstructed from the column names used in the queries below):

Section (idSection, Title)
Article (idArticle, Title)
SectionArticleMap (Section_idSection, Article_idArticle)

How would you query all the articles of a section? My response is:

SELECT A.* FROM Section S
JOIN SectionArticleMap SAM ON S.idSection = SAM.Section_idSection
JOIN Article A ON A.idArticle = SAM.Article_idArticle
WHERE S.Title = 'Name'

The only response he thinks is acceptable is:

SELECT * FROM SectionArticleMap SAM
WHERE Section_idSection = 1
AND Article_idArticle = 1

He claimed a string has no place being in a SQL statement; he believes there’s only one correct way, and I’m sorry, but he’s wrong. He favors IDs because they’re immutable and, he believes, will remain longer, which is true. But if you look at categories, they’re represented by names, not IDs. In a sea of SQL statements, I would have to do a lot of grunt work to figure out exactly which section an ID-based statement is tied to; if I wanted to re-use it, I’d have to figure out which ID to replace it with. The former lets me easily identify the section and re-use the query: the section is called “Name”, and if I need to re-use the statement for another section, I simply change the name.

I’m not saying the former is THE CORRECT way of doing things, nor am I claiming the latter is THE INCORRECT way. What I’m claiming is that the strong statement “such things have no business being in a SQL query” is wrong. The former is clearly easier to understand than the latter. I know at a glance that I’m fetching articles for a section titled “Name”; with the latter, I’ll have to run some additional queries, and if the titles aren’t maintained in the DB but in the code, then some code diving, and if the DB structure somehow becomes unsynced with the code, then some nightmares are due to follow. There are pros and cons to every approach.
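To ground the comparison, here’s a small self-contained sketch against SQLite (the schema and sample rows are my own reconstruction from the column names in the queries above, not the original structure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Schema reconstructed from the column names used above (an assumption).
cur.executescript("""
CREATE TABLE Section (idSection INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE Article (idArticle INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE SectionArticleMap (
    Section_idSection INTEGER,
    Article_idArticle INTEGER
);
INSERT INTO Section VALUES (1, 'Name');
INSERT INTO Article VALUES (1, 'First article');
INSERT INTO SectionArticleMap VALUES (1, 1);
""")

# Readable version: join by section title.
by_name = cur.execute("""
    SELECT A.Title FROM Section S
    JOIN SectionArticleMap SAM ON S.idSection = SAM.Section_idSection
    JOIN Article A ON A.idArticle = SAM.Article_idArticle
    WHERE S.Title = 'Name'
""").fetchall()

# ID-only version: you must already know that section 1 is 'Name'.
by_id = cur.execute("""
    SELECT A.Title FROM SectionArticleMap SAM
    JOIN Article A ON A.idArticle = SAM.Article_idArticle
    WHERE SAM.Section_idSection = 1
""").fetchall()

print(by_name == by_id)  # True: both styles return the same rows
```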

IDs have no basis being in a mapping table

I basically add an ID to all tables nowadays for cross-platform compatibility. I informed him that during my time as a professional developer, I’ve come across scenarios that merited an ID being in a mapping table, to which he countered that he’s been working professionally for 25 years, there is never a case for an ID column in a mapping table, and anything requiring it is just crap code. It appears that during his time he might not have dealt with the need for many different codebases to interface with the same database, or at the very least, not CakePHP: “By convention the ORM also expects each table to have a primary key with the name of id” (CakePHP documentation).

From a database perspective, it’s very easy to say that an ID primary key takes up unnecessary space and is bad practice, but once you factor CakePHP into the picture, having an ID IS the best practice.

Is CakePHP crap code? I personally don’t think either CakePHP or any software built on top of it is crap code. There is always room for improvement, but without understanding the rhyme or reason for why things are the way they are, I’m hesitant to call things broken.

I’m not a big fan of people with high technical responsibility being extremely closed-minded. Certain solutions aren’t ideal for one case but might be ideal for another, which is why in academia you’re going to hear a lot of “it depends”. People whose lives involve wisdom and learning often know that there’s never a clear-cut answer for everything, and everything depends on other factors. Why, then, is the world so littered with single-solution answers?

Managerial Assessment

Thursday, July 3rd, 2014

It’s that time of the project again: something went wrong, and a goat needs to be sacrificed. As a person who is often in charge of projects, I hold my bosses to the same standards as I hold myself and my underlings. If something goes wrong, the problem runs from the bottom all the way to the top of the chain.

In a simplistic example, assume there is a dev team, a team lead, a CTO, and then the CEO. If the project falls apart and there is a firing decision, the CEO MUST have a team debrief. Every single member involved needs to write up, in their own opinion, what happened. Sure, a project could’ve failed because someone on the bottom didn’t know what they were doing, but at the same time, isn’t it the team lead’s job to make sure they knew? Then isn’t it the CTO’s job to make sure the team lead is on task? Isn’t it the CEO’s job to make sure the CTO is capable of such actions?

Fact of the matter is, incompetence happens at all levels of a company. Just because there is a scapegoat doesn’t mean the issue has been taken care of. You have a termite infestation; you’ve killed one termite, but the infestation still exists.

As a CEO, you should gather data on various people’s perspectives on what the issue is, and formulate your own decision. You have to get a perspective of how things look from above, and then another of how things look from below. Just like the game of “telephone”, if you don’t know what message the very end received, you don’t know whether the message was corrupted along the way; in fact, unless you investigate the “nodes”, you won’t even know where and when things got corrupted. Not debriefing is like sailing your ship through iceberg-ridden waters without checking for icebergs.

Scapegoating will buy bad management time between the current SNAFU and the next, but if you catch a manager in the act of scapegoating, you can prevent yourself from losing some very talented individuals (human capital), while also preventing the bad manager from gaining power. Think about it this way: once the bad manager sets the tone that anyone who disagrees with his horrible management style will get fired, who will correct his actions? A strong IT company needs to be built on allowing talent, innovation, and best practices to flourish. Allowing bad managerial nodes will create a chilling effect, which will ultimately hamper your IT team, and ultimately your business.

As a CEO, debriefings, exit interviews, and the like are the least you can do. As a board member or investor, I’d expect them to do at least this much. Even at the highest level there is such assessment, so why would you think that as a CEO you can afford to simply take management’s word for things? Even auditors are brought into the picture from time to time. To improve, you must assess; progress without assessment is most likely just bull excrement.

www subdomain or no www subdomain

Monday, May 19th, 2014

This is a very old and ancient topic, but I’ve arrived definitively at whether or not the main domain should have www. The answer is “it should”.

The reason is that a cookie set at the domain level exists for all subdomains. If you have subdomains, or ever plan to have subdomains in the future, it’s best to use the “www” subdomain for your main site. It’ll pay off by saving you some headaches down the line when you have specialized subdomains (blog, beta, members, etc.).
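To make the scoping concrete, here’s a small sketch with Python’s standard library (example.com is a placeholder domain): a Set-Cookie header carrying a Domain attribute is sent to every subdomain of that domain, while omitting Domain makes the cookie host-only, confined to the exact host that set it, such as www.example.com.

```python
from http.cookies import SimpleCookie

# Domain-scoped cookie: browsers send this back to example.com AND every
# subdomain (blog.example.com, beta.example.com, ...).
shared = SimpleCookie()
shared["session"] = "abc123"
shared["session"]["domain"] = ".example.com"   # placeholder domain

# Host-only cookie: no Domain attribute, so it is only sent back to the
# exact host that set it, e.g. www.example.com.
host_only = SimpleCookie()
host_only["session"] = "abc123"

print(shared.output())     # header includes Domain=.example.com
print(host_only.output())  # header has no Domain attribute
```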

Project Constraints and Project Selling

Tuesday, March 4th, 2014

There are three things you can control about a project: time, resources, and features. Of the three, you can at best control two.

Which is why I propose for projects to have the following creation and definition flow:

  1. Feature gathering
  2. Resources / Budget constraints
  3. Time / Delivery constraints
  4. Project planning, project options, packaging, pricing
  5. Investigation
  6. Execution


  1. We want to undergo feature gathering first, because how can you size something with unknown dimensions?
  2. We want to know the budgetary constraints on the project; since we have limited resources, we’ll have limited options, and we’ll have to live with the consequences of having limited resources.
  3. We want to know when the project needs to be delivered by; oftentimes, if it’s something that needs to be rushed, then overtime might be necessary, or perhaps more resources.
  4. This is where we plan and price out the project. I think it makes sense to allow the client to control two of the three factors of project planning. Once we know what the client is willing to give up, we can structure the deal around it. I’m sure that with the right resource allocation, any venture can be profitable.
  5. During this phase we need to figure out exactly what the project entails, and whether or not we can properly take it on.
  6. During this phase we basically get it done.

This flow seems to make sense to me; if anyone knows a better flow, let me know, because this is the flow I’ll stick with for now.

Sell Reputation

Wednesday, February 26th, 2014

The Greek philosopher Aristotle divided the means of persuasion, or appeals, into three categories: ethos, pathos, and logos. Today, we’ll talk about ethos.

When you’re trying to persuade a customer that your product is worth more than another person’s product, you will invoke one of the three. Substantial investment will be made mostly on the logos and ethos fronts. In modern-day terms, logos will be data, reports, forecasts, etc., whereas ethos will simply be reputation.

I believe perception and reputation are a good form of investment, because if you want to convince your customers that your product is worth more, you’ll have to employ one of those three methods. You can logically convince a customer, but they’ll provide you with logical prices.

The key is to focus on the illogical. Beauty is in the eye of the beholder, and your valuation will be based on what they perceive. Sometimes it will be lower, sometimes accurate, and sometimes higher. The fact that it can be higher gives you a great opportunity to capitalize on the differential.

You can sell logic, you can sell reputation, or you can even sell empathy. If I had to choose one, I’d think reputation has the most potential for irrational profits.