Is working with Python Environments a barrier to Python use?

Vanilla Installs
What is everyone complaining about?
I only need one python ever!

Oh, now app 1 is installing dependencies over app 2 because I’m installing everything into the system Python, and I have to run as admin or sudo pip install everything!

Virtual envs
python -m venv venv
Pycharm’s venv creator
venv wrapper
activating with activate.bat and dot sourcing
looking at PATH to figure out why nothing works
adding PYTHONPATH for things that for some reason can’t be in the venv
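For what it’s worth, the `python -m venv venv` step above can also be driven from Python itself via the stdlib venv module. A minimal sketch (the temp-directory path is just a throwaway for illustration):

```python
# Sketch: creating a virtual environment with the stdlib venv module,
# equivalent to running `python -m venv venv` from the shell.
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "venv")
# with_pip=False keeps this fast; pass with_pip=True to bootstrap pip too.
venv.create(target, with_pip=False)

# The environment's config file records which interpreter created it.
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))  # True
```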

Oh, but I will never need another python.
Yeah right, python 2 and 3 everywhere, 2 often unavoidable.
Pypy can solve perf problems
You probably don’t want to migrate from 3.n to 3.n+1 right away because libraries take a while to catch up, so you end up switching back and forth.
Oh, you’re going to stick with 3.1 forever? Pip just installed the latest version of a lib that doesn’t support 3.1, do you know how to pin your dependencies?
Oh you found a bug that is fixed, but now you need to move away from 3.1, etc.
So now you need pyenv, pywin and/or tox. Tox does cross-Python testing; pyenv and pywin do interpreter switching.

Ah, my versions are pinned, I have no more problems
Dependency A depends on B 2.1 and dependency C depends on B 2.3. Pip, out of the box, provides no clues about why everything is broken.
Okay then, I’ll use pipenv. Now I can get a graph to see where the transitive problems are and get warnings when they happen. Also, I can separate out development dependencies so that the dependency graph for a production release is simpler.
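The A-wants-B-2.1, C-wants-B-2.3 situation can be made concrete with a toy conflict detector. This is a hypothetical sketch, not how pipenv or pip actually resolve anything; it only shows why a single install target can’t satisfy two different pins:

```python
# Toy illustration of a transitive dependency conflict: A pins B==2.1,
# C pins B==2.3, and only one version of B can be installed.

def find_conflicts(requirements):
    """requirements: list of (who, package, pinned_version) tuples."""
    pins = {}
    conflicts = []
    for who, package, version in requirements:
        if package in pins and pins[package][1] != version:
            # Someone already claimed a different version of this package.
            conflicts.append((package, pins[package], (who, version)))
        else:
            pins.setdefault(package, (who, version))
    return conflicts

deps = [("A", "B", "2.1"), ("C", "B", "2.3")]
print(find_conflicts(deps))
# [('B', ('A', '2.1'), ('C', '2.3'))]
```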

No more problems, yes sir.
“Can’t find FORTRAN compiler” – Yes, some common data science libraries have FORTRAN dependencies.
Oh, easy, I just won’t do data science, just Postgres. “Can’t find C++ compiler. Can’t find Python.h. Can’t find…” Oh, I guess I need to install toolchains. Or learn to install wheels. Or realize that if you have a lot of native dependencies, maybe you should use conda.

No more problems? Everyone on the team did the exact same thing, identically. All easy now?
Well, actually, they didn’t; all of the above are slightly different on each OS.
But let’s imagine everyone did have the same workstation.
The build environment runs in Docker, often in a Linux distro with most dependencies. This is so reliable, so why didn’t we just use Docker all the time? Because it is very, very slow if you’re writing to it all the time.
The production environment also runs in Docker, often a small distribution like Alpine, except many ordinary packages lack a corresponding apk for installs.
VMs and Vagrant are often too slow.
And finally, you generally need a bunch of bash scripts to make this run in production. To get them to work on Windows, you’ll need Git Bash, or mingw64, or Cygwin, all of which behave differently.
But what about Ubuntu on Windows (WSL)? Until WSL2 is released, performance is really bad for anything involving IO, but some people can put up with it. It’s also not supported by the free edition of PyCharm, AFAIK.

Version Incrementing on Multiple Branches

“Increment source or tag version by 0.0.1 at end of build script”
If you have main, branch 1, branch 2, and a build script that wants to increment the build number after each successful build, you’ll have race conditions.

Simultaneously, dev 1 and dev 2 check out their respective branches. They build, using the latest tag on main. When they push, they push the same version number; or, if they push and tag main, the second developer will get “tag already exists”.
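The race can be simulated in a few lines: both developers read the same latest tag before either pushes, so they compute identical “next” versions. (The v-prefixed tag format here is just an assumption for the sketch.)

```python
# Simulating the race: both developers see the same latest tag on main,
# so both compute the same "next" version before either pushes.

def next_version(tag):
    major, minor, patch = tag.lstrip("v").split(".")
    return f"v{major}.{minor}.{int(patch) + 1}"

latest_on_main = "v1.0.10"           # both branches see this
dev1 = next_version(latest_on_main)  # builds on branch 1
dev2 = next_version(latest_on_main)  # builds on branch 2

print(dev1, dev2)  # v1.0.11 v1.0.11 -- the second push gets "tag already exists"
```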

“Version Server”
If a version number server handed out numbers, then each developer could reserve a version number, say 1.0.11. But if dev 1 gets a version number first but pushes second, or even months later, his version number is out of order (e.g. tag 1.0.10 comes after 1.0.11).

“Version Master”
Alternatively, no versions are bumped on branches. A separate process or person, “the Version Master”, periodically tags the main branch with a version number; on that commit it does nothing else except increment the version number in source and add a tag.

But what is a version, and what does this buy us over changeset ids? Let’s assume that passing builds are less common than commits, so it would make sense to bump a version or build number for each passing build.

“Version Master” with Flagging
We could have the build script put a flag into a changeset, like a “GIMME_A_VERSION.txt” file, and once that file makes it to main, count up the existing version tags and bump the tag. Downside: the source doesn’t know its own version number without checking git. So the Version Master would have to see a request for a version bump in the source, bump the version in source, tag it with the matching tag and push it.

This system could miss version bumps if two passing builds arrived in rapid succession: you’d only be able to push one version number in source, and you wouldn’t be able to insert a bumped version number in between commits.
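The “count up the existing version tags and bump” step a Version Master process might run could look something like this. The tag format and prefix are assumptions for the sketch, not a prescription:

```python
# Sketch of the bump step: given the existing tags, find the highest patch
# number under a known prefix and add one.
import re

def bump_from_tags(tags, prefix="v1.0."):
    """Return the next patch tag after the highest existing one."""
    pattern = re.compile(re.escape(prefix) + r"(\d+)$")
    patches = [int(m.group(1)) for t in tags if (m := pattern.match(t))]
    return prefix + str(max(patches, default=-1) + 1)

# Non-version tags like release notes are simply ignored.
print(bump_from_tags(["v1.0.9", "v1.0.10", "release-notes"]))  # v1.0.11
```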

This is so hard.

WFH: Work From Home

I have a mostly-WFH job, and now I’ve got enough experience with it to have opinions, so here is the good, the bad, and the ugly.

The caveat here is that I don’t work for GitHub, a company famous for a functioning remote workforce. I work somewhere where only formal communications go out by mailing list, MS Teams/Slack has zero adoption, etc.

The Good.
Generally less commute, less time in the car.

You can potentially, finally get a quiet office with a door.

The Bad, The Ugly.
Organizational onboarding just never happens. You never get introduced to anyone, so you never develop the informal connections. You are free to cold call people in the organization, but odds are you don’t even know who to cold call. This is true for everyone above you, below you or at your level in the org chart. I’m guessing the organization is running on the informal connections made before telework got started, or on those rebels who chose to stay at the office despite widespread telework.

You wake up every morning not sure if you need to head into work. Onsite meetings generally get canceled minutes before they start.

Because you are an offsite worker, you don’t get even a cubicle assigned to you at the actual office building, you get to use the public computer lab. So every time you have to go pee or eat lunch, you have to pack up your entire workspace or just hope no one is going to pilfer your stuff in the computer lab or give it to lost & found.

My quiet home office with a door is actually just a bedroom with a door. The lawnmowers and garbage trucks are outside my window every day, loud enough that folks on the phone can’t hear me.

Mentally switching to and from “work mode” doesn’t happen. I either get stuck in “at home” mode or “at work” mode – usually “at home” mode, which is a mess for productivity.

The main tool for communication is Outlook, so there is no equivalent of a “chat room” where you can post a question. There are only mailing lists of 50 to 150 people. I never see informal communication on these.

Work from home will fall short if the organization isn’t GitHub, there isn’t the equivalent of Slack, and there isn’t a formal onboarding policy that makes sure a new employee actually interacts with people. Putting the new hires in a room with 250 people they won’t even be working with, plus the “meet 35 people in 35 minutes as you walk around the office at random” routine, doesn’t count – that shit is what pure-onsite orgs do when they know they don’t have to get onboarding right, because everyone you need to work with is going to be within 10 feet of your cubicle and the relationships will form naturally.

In other words, work-from-home probably destroyed Yahoo.


Why you should work at Burson-Marsteller on my team

THIS JOB DOESN’T EXIST ANYMORE. The company doesn’t have this name anymore.

The job app for an opening on my team went up a few days ago (March 2017), here:

It’s in downtown DC, close to a Metro stop. We plan to find a mid-career developer and pay mid-career developer rates.

What are we doing?
The job is mostly backend development, Ubuntu-Nginx-Postgres-Django. So UNPD or DUNP or UPDN or something like that. It’s a hybrid of data work, application development, data science and a cloud component. We use AWS with a variety of server and service types. Developers are learning more about how to do server admin in order to deal with AWS and server administrators are learning more about application development, a phenomenon called DevOps*. (*because I say so. aficionados think DevOps is a cultural shift in IT departments, too.) We are down to one on-premise server and that’s the way we like it.

What’s the team look like?
We are doing what I call a sincere scrum. You may be more familiar with the “yeah-right” software development process, where someone dictates SDLC and the team and management say “yeah right, we’ll do that”. Part of getting team members autonomy is having some clarity around roles. At the moment, we have a clear Product Owner (my supervisor), a Scrum Master (myself), a server administrator and a front end developer.

So what the heck is a scrum master? By the agile textbook, I’m in charge of facilitating the daily status reports and curating the ticket database, but I don’t primarily choose what features we implement. The product owner decides what features are going to make the team money.

Scrum (and especially ‘Extreme Programming’) has some opinions about practices, such as the need for unit testing, build servers, source control, etc. I’m also the build master, which means we have a build server and a build script that automatically compiles, lints, and runs tests. If you haven’t worked on an application with a suite of unit tests: it makes maintenance development far less stressful. New bugs and breaks are found fast and can be fixed fast. Build-script-driven development also means that when it comes time to do testing with live humans or code review with live humans, time is spent on higher-level concepts instead of nuisance bugs.

The team is distributed between DC and NYC. We don’t mind if team members work a day a week at home.

What are the cool technologies?
On top of a cake of data work and application development, for frosting, we get to work with social media APIs, a huge multinode Redshift database, Redis, and data science libraries like scikit-learn, pandas and so on. Many of our app’s components do huge data pulls that feed data science reports. The cost of data and the cost of statistical prediction have come way down in the last few years, and we are exploiting that to do AI-like or magic-like predictions and analysis.

So throw your resume into the pile and our HR department will get it to me:

NB: I don’t speak for Burson-Marsteller. I just happen to work there and we have a job opening on our team.

Adapting a template for a resume

For various reasons, I’ve become way too familiar with the technologies associated with creating resumes. For the record, the coolest are: CV Maker, LaTeX resume templates, JsonResume and just about anyone’s HTML5 resume template. The HTML5 templates are written by people who actually have artistic taste, so they look beautiful. No way could I do the same in a short time, so I bought a template. (Never mind that the template store’s UI let me buy the wrong template before I bought the right one; let’s focus on the happy parts of this experience.)

To use it for myself, I had to:

Assemble the raw material for a resume. StackOverflow Careers is my ground truth for resume data. From there I copy it to USAJobs and so on.

Load it up in IntelliJ. Visual Studio with ReSharper is not too bad, but if you just use IntelliJ, you get all the goodness that ReSharper was giving you and more.

Disable the PHP mailer. A contact form is just a spam channel. Don’t ask me why spammers think they can make money sending mail to web admins (unless maybe it’s spear phishing). I considered not showing my email address, but the spam harvesters already have my email address and Google already perfectly filters my spam.

Strip out the boilerplate. Every time you think you’ve got it all, there are more references to John Doe.

Fix the load image. The load image was waiting for all assets to render before it would remove the annoying spinner and curtain. But the page did not have any elements that the user might interact with too early. The page didn’t have any flashes of unstyled content like you see with Angular. There weren’t any UI elements suddenly snapping into place on the document-ready event like in a certain app I’ve worked on before.

Deminimize the code. This should be easy, right? A template has to ship with the JavaScript source code. But the code was minimized. So I pointed the page to the non-minimized version. The whole page broke. Finally I noticed the minimized file actually contained version 1.2, while the non-minimized file that shipped was 1.0. So I deminimized the code and could begin removing the extraneous load image.

Upload to my hosted virtual machine. Filezilla all of a sudden decides it can connect, but can’t do anything. Some minutes later, I figure out that TunnelBear, my VPN utility, and Filezilla don’t play well together. So I added an exception for my domain.

Write blog post. I just wanted my resume to be a nice container. But as a developer, it sort of looks like maybe I wrote this from scratch. I certainly did not.

How to do postbuild tasks painlessly in Visual Studio

So, you’d think you’d need to write custom msbuild targets or even learn msbuild syntax. Wrong!

(Okay, some of this can be done via the project properties, on the “Build Events” tab. But the “Build Events” tab hides the fact that you are writing msbuild code, and now you have batch file code embedded in a visual property page instead of in a file that can be checked into source control, diffed or run independently.)

You will have to edit your csproj file.

1. Create a postbuild.bat file in the root of your project. Right click for properties and set “Copy to Output Directory” to “Copy always”. A copy of this will now always be put in the bin\DEBUG or bin\RELEASE folder after each build.

2. Unload your csproj file (right click, “Unload project”), right click again to bring it up for edit.

3. At the very end, add this (the AfterBuild target may already exist, commented out):
<Target Name="AfterBuild">
  <Exec Command="CALL postbuild.bat $(OutputPath)" />
</Target>

The code passes the \bin\DEBUG or \bin\RELEASE path to the batch file. You could pass more msbuild specific variables if you need to.

Strangely, the build output window will always report an error regarding the first line of the batch file. It will report that SET or ECHO or DIR or whatever isn’t a recognized command. But the 2nd and subsequent lines of the batch file run just fine.

From here you can now call out to powershell, bash, or do what batch files do.

What a Build Master should know or do as a Build Master.

An automated build is beautiful. It takes one click to run. The clicking on the run button is completely deskilled. It can be delegated to the cat.

Setting up and troubleshooting a build server on the other hand is unavoidably senior developer work, but I encourage everyone to start as soon as they can stomach the complexity.

Does it compile?

A good build pays close attention to the build options, as a production release will have different options from your workstation build. If it builds on your machine, you may still have accidentally sabotaged performance and security on the production server. Review all the compilation options.

In the case of C#, the rest of the build script is a csproj file, which is msbuild code, which is executable xml. You don’t need to know about how it works until stuff breaks and then you need to know enough msbuild to fix it. Also, because the build server sometimes doesn’t or can’t have the IDE on it, the msbuild script might be the only way to modify how the build is done.

The TFS build profile is written in XAML, which again is executable XML. Sometimes it has to be edited, for example if you want to get TFS to support anything but ms-test. Fear the day you need to.

Technologies to know: msbuild, IDEs (VS20XX), TFS GUI, maybe XAML, possibly JS and CSS compilers like TypeScript and SASS

Is it fetching source code correctly, can it compile immediately after check out to a clean folder?

When there are 50 manual steps to be done after checking code out before you can compile, the build master must fix all of these. Again, it builds on the workstation, but all that proves is that you have a possibly non-repeatable build.

Maybe 90% of the headaches have to do with libraries or, nowadays, package repository managers like nuget, bower, npm, etc. A sloppy project makes no effort to put dependencies into source control, and crappy tools mean the build server or build script is unaware of the library managers.

Technologies to know: tfs, nuget, bower, npm, your IDE

What is “good” as far as a build goes?

A good build server is opinionated and doesn’t ship whatever successfully writes to a hard drive. For some technologies, there isn’t even such a thing as compilation; those technologies have to be validated using lint, unit tests and so on. These post-build steps can either fail the build or not. If they don’t fail the build, then often they are just ignored. Failing unit tests should fail a build. Other failing tasks probably should fail a build too, even if they aren’t production artifacts. I usually wish I could fail a build on lint problems, but depending on the linter and the specific problems, sometimes there just isn’t enough time to address (literally) 1,000,000 lint warnings.

Technologies to know: mstest, nunit, xunit, and other unit test frameworks for each technology in your stack.

Who fixes the failing tests? Who fixes the bugs?

The build master, depending on the organization and how dysfunctional it is, is either completely or partially responsible for fixing the build. There is no way to write a manual for how to do this. Essentially, as a build master, you have to dig into someone else’s code and either demonstrate they broke the build and are obliged to fix it, or quietly fix it yourself, or whatever the team culture allows you to do.

Technologies to know: nunit test, debugging, trace

We got a good build, now what? Process.

Not so fast! Depending on the larger organization’s policies with respect to command and control, you may need to get a long list of sign-offs from people before you can deploy to the next environment. Sometimes you can have the build server deploy directly to the relevant environment; sometimes it spits out a zipped package to be consumed by some sort of deployment script. Usually, though, the build server can’t deploy directly to production due to air gaps or cultural barriers.

Technologies to know: Jira or whatever issue tracker is being used.
Non-technologies to know: your organization’s official and informal laws, rules and customs regarding deployment.

The Grand Council of Release Poobahs and your boss said okay, now what?
This step is often the most resistant to automation. It often has unknowable steps, like filling in the production password, production file paths and IP addresses.

MsBuild supports no fewer than two XML transformation syntaxes for changing XML config for each environment.

For environments you know about, it may be advisable to do environment discovery. It’s either wonderful or an easy way to shoot yourself in the foot. When you know the target server is a Windows Server 2008 box, and on such servers it must do X and on Win 7 workstations it must do Y, don’t forget to think about the Windows 10 machine that didn’t exist when you wrote your environment discovery code. Maybe it should blow up on an unknown machine, maybe it should fall back to a safe default.
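One defensible shape for environment discovery is fail-fast: handle the environments you know about explicitly and refuse to guess on anything else. A sketch, with made-up OS names and step names for illustration:

```python
# Sketch of fail-fast environment discovery: known environments get
# explicit steps; anything unknown raises instead of silently doing
# the wrong thing. Names here are illustrative only.

def deployment_steps(os_name):
    known = {
        "Windows Server 2008": ["do_x"],
        "Windows 7": ["do_y"],
    }
    try:
        return known[os_name]
    except KeyError:
        # The Windows 10 box that didn't exist when this was written
        # lands here instead of silently running the wrong steps.
        raise RuntimeError(f"Unknown environment: {os_name!r}")

print(deployment_steps("Windows 7"))  # ['do_y']
```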

Technologies to know: batch, powershell, msdeploy, MS Word
Non-technologies to know: your organization’s official and informal laws, rules and customs regarding deployment.

Optional Stuff

Build servers like TFS also have built into them bug trackers, requirements databases, SSRS, SSAS (Analysis Services), and build farm things. They are all optional and each one is a huge skill. SSAS alone requires the implanting of a supplemental brain so you can read and write MDX queries.

Also optional is learning how other build servers work. No single build server has won over all organizations, so you will eventually come across TeamCity, LuntBuild, etc.

JavaScript Logging Libraries

Features worth searching for:
Log-by-level. E.g. info, warn, verbose, error.
Log-by-module/theme. E.g. MyClass, file1.js, Data, UI, Validation, etc. Sometimes called “groups” or other things.
Log-errors. Info, warn, verbose are all the same data type, but the error is a complex data type and the work flow differs dramatically from the others.
Log-to-different places, e.g. HTML, alert, console, Ajax/Server
Log-formatting. E.g. xml, json, CSV, etc.
Log-correlation. E.g. if you log to several places, say a client, a server, a web service and a db, and a transaction passes through all four, can you correlate the log entries?
Log-analysis. E.g. if you generate *a lot* of log entries, something to search/summarize them would be nice.
Semantic-logging. E.g. logging (arbitrary) structured data as well as strings or a fixed set of fields.
Analytics. Page hit counters. (I didn’t search for these)
Feature Usage. Same, but for applications where feature != page
Console. Sometimes as a place to spew log entries, sometimes as a place for interactive execution of code.
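The level-plus-category combination at the top of that list maps neatly onto Python’s stdlib logging, shown here only as a familiar analogue to the JavaScript libraries being surveyed (logger names play the role of “groups”):

```python
# Level + category filtering, the same features the JS libraries offer.
import logging

logging.basicConfig(level=logging.WARNING)  # log-by-level threshold
ui = logging.getLogger("app.ui")            # log-by-module/category
data = logging.getLogger("app.data")

# Turn one category up without touching the rest.
data.setLevel(logging.DEBUG)

ui.debug("ignored: the ui category inherits the WARNING threshold")
data.debug("shown: the data category is set to DEBUG")
```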

Repository queries: a NuGet search for JS logging libraries (25 entries right now) and the corresponding Node.js repository query.

Various Uncategorized Browser-Centric Libraries
  • Category logger (here categories are called “groups”)
  • Category & level logger
  • Colorful logging by category
  • Side console – Serilog/Structured Log
  • Log4JavaScript – for people who like the log4x API. As of 2015, appears dated & unmaintained.
  • A more up-to-date log4x library
  • Fancy on-screen log
  • A level-logger (many features are IE only)
  • Console logger with module filters
  • Client-side logger that sends events to popular server-side logging libraries (more than just server-side node)
  • On-screen (HTML) logging overlay
  • Monkeypatches the built-in console object

Microlibraries for Browser
These might not be any smaller than other libraries.
  • Supports local storage logging
  • 4-level logging with an on/off switch at runtime
  • 4-level logging with environment switches & ajax
  • 4-level logging; supports plugins
  • console.log wrapper
  • Same as JSNLog, but just the JS part, so it’s like a microlibrary
  • Polymer/web-component style console logging
  • Polyfill for pretty-console display?
  • Build step to remove console.log entries before sending to production

Abandonware
  • Abandoned? Not sure what it does.
  • NitobiBug – Abandoned. If you look long enough you can find websites that serve up the file.

Browser plug-ins – Firefox centric.

Error Logging
Error logging is more than a print statement. Generally, at the point of error you want to capture all the information that the runtime provides: the stack trace and more. Or roll your own.

Node Loggers (might work in the browser, not sure): several Node-centric libraries, plus Bunyan.

Commercial Loggers
Often an open-source client that talks to a commercial server. No idea if these can work without the server component.

Dumb Services – Accessing a website when there isn’t a web service API

The dumbest API is a C# WebClient that returns a string. This works on websites that haven’t exposed an asmx, svc or other “service” technology.

What are some speed bumps this presents to other developers who might want to use your website as an API? The assumption here is that there is no coordination between the website’s owners and the developers consuming it.

All websites are REST level X.
Just by the fact that the site works with web browsers and communicates over HTTP, at least some part of the HTTP protocol is being used. Usually only GET and POST, and the server returns a nearly opaque body. By that I mean the MIME type lets you know that it is HTML, but from the document alone, or even from crawling the whole website, you won’t necessarily programmatically discover what you can do with the website. Furthermore, the HTTP status codes are probably being abused or ignored, resources are inserted and deleted on POST, headers are probably ignored, and so on.

Discovery of the API == Website Crawling.
If you could discover the API, then you could machine-generate a strongly typed API, or at least provide metadata about the API at runtime. A regular HTML page will have links and forms. The links are a sort of API. You can crawl the website and find all the published URLs, and infer from their structure what the API might be. The URL might be a fancy “choppable” URL with parameters between /’s, or it might be an old-school QueryString with parameters as key-value pairs after the ?.

You can similarly discover the forms by crawling the website. Forms at least will let you know all the expected parameters and a little bit about their data types and expected ranges.
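The discovery step described above needs nothing exotic; even the stdlib HTML parser can pull out the links and form fields, which together are the site’s implicit API surface. A sketch (the sample page is invented):

```python
# Sketch: discovering a site's implicit API surface -- its links and
# form fields -- with only the stdlib html.parser.
from html.parser import HTMLParser

class ApiSurface(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.forms, self.inputs = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])       # URL "endpoints"
        elif tag == "form":
            self.forms.append(attrs.get("action", ""))
        elif tag == "input" and "name" in attrs:
            self.inputs.append(attrs["name"])      # expected parameters

page = """<a href="/customers?id=7">Customer 7</a>
<form action="/search"><input name="q" type="text"></form>"""

p = ApiSurface()
p.feed(page)
print(p.links, p.forms, p.inputs)  # ['/customers?id=7'] ['/search'] ['q']
```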

If the website is JavaScript driven, all bets are off unless you can automate a headless browser. For a single page application (SPA), your GET returns a static template and a lot of JavaScript files. The static template doesn’t necessarily have the links or forms, or if it does, they are not necessarily filled in with anything yet. On the other hand, if a website is an SPA, it probably has a real web service API.

Remote Procedure Invocation
Each URL represents an end point. The trivial invocations are the GETs. Invocation is a matter of crafting a URL, sending the HTTP GET and deciding what to do with the response (see below.)

Then there are the action URLs of the forms. The forms tell you more explicitly what the possible parameters and data types are.

Data Serialization.
The dumbest way to handle the response from a GET or POST is as a string. It is entirely up to the client to figure out what to do with the string. The parsing strategy will depend on the particular scenario. Maybe you are looking for a particular substring. Maybe you are looking for all the numbers. Maybe you are looking for the 3rd date. In the worst-case scenario, there is nothing a dumb service client writer can do to help.

The next dumb way to handle a dumb service response is to parse it as HTML or XML, for example with Html Agility Pack, a C# library that turns reasonable HTML into clean XML. This buys you less than you might imagine. If you had an XML document with, say, Customer, Order, Order Line and Total elements, you could almost machine-convert it into an XSD and corresponding C# classes that could be consumed conveniently by the dumb service client. But in practice, you get an endless nest of Span, Div and layout Table elements. This might make string parsing look attractive in comparison. Machine XML-to-C# converters, like xsd.exe, have no idea what to do with an HTML document.

The next dumb way is to just extract the tables and forms. This would work if tables are being used as intended- a way to display data. The rows could then be treated as typed classes.
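The extract-the-tables strategy can be sketched with the stdlib parser too: if the table really is tabular data, each body row becomes a record keyed by the header cells. (Python and a made-up two-column table stand in here for the C# tooling the post discusses.)

```python
# Sketch of the "just extract the tables" strategy: collect cell text
# per row, then zip the header row against each body row.
from html.parser import HTMLParser

class TableRows(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = ""

    def handle_data(self, data):
        if self._cell is not None:   # only capture text inside a cell
            self._cell += data

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append(self._cell.strip())
            self._cell = None
        elif tag == "tr":
            self.rows.append(self._row)

html = """<table>
<tr><th>Customer</th><th>Total</th></tr>
<tr><td>Acme</td><td>120</td></tr>
</table>"""

p = TableRows()
p.feed(html)
header, *body = p.rows
print([dict(zip(header, row)) for row in body])
# [{'Customer': 'Acme', 'Total': '120'}]
```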

The next dumb way is to look for microformats. Microformats are annotated HTML snippets that have class attributes that semantically define HTML elements as being consumable data. It is a beautiful idea with very little adoption. The HTML designer works to make a website look good, not to make life easy for Dumb Services. If anyone cared about the user experience of a software developer using a site as a programmable API, they would have provided a proper REST API. It is also imaginable to attempt to detect accidental microformats, for example, if the page is a mess of spans with classes that happen to be semantic, such as “customer”, “phone”, “address”. Without knowing which elements are semantic, the resulting API would be polluted with spurious “green”, “sub-title” and other layout oriented tags.

The last dumb way I can think of is HTML 5 semantic tags. If the invocation returns documents, like letters and newspaper articles, then the elements header, footer, section, nav, or article could be used. The world of possible problem domains is huge, though. If you are at a CMS website and want to process documents, this would help. If you are at a travel website and want to see the latest Amtrak discounts, then this won’t help. I imagine 95% of possible use cases don’t include readable documents as an important entity. Another super narrow class of elements would be dd, dl, and dt, which are used for dictionary and glossary definitions.

Can there be a Dumb Services Client Generator?
By that, I mean, how much of the above work could be done by a library? This SO question suggests that up to now, most people are doing dumb services in an ad hoc fashion, except for the HTML parsing.

  • The web crawling part: entirely automatable. Discovering all the GETs, and Forms is easy.
  • The meta-data inference part: Inferring the templates for GET is hard; inferring the meta data for a form request is easy.
  • The Invocation part is easy.
  • The Deserialization part: Incredibly hard. Only a few scenarios are easy. At best, a library could give the developer a starting point.

What would a proxy client look like? The loosely typed one would, for example, return a list of URLs and strings, and execute requests, again returning some weakly typed data structure, such as string, Stream, or XML, as if all signatures were:

string ExecuteGet(Uri url, string querystring)
Stream ExecuteGet(Uri url, string querystring)
XmlDocument ExecuteGet(Uri url, string querystring)

In practice we’d rather something like this:

Customer ExecuteGet(string url, int customerId)

At best, a library could provide a base class that would allow a developer to write a strongly typed client over the top of that.
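The layering described here, a weakly typed base that only fetches strings with a developer-written strongly typed client on top, can be sketched in Python (the post’s examples are C#; the Customer type, the URL shape, and the fake transport below are all invented for illustration):

```python
# Sketch: weakly typed base client + strongly typed client on top.
# The transport is injected so the example needs no network.
from dataclasses import dataclass

class DumbServiceClient:
    def __init__(self, transport):
        self._transport = transport  # callable: url -> response body string

    def execute_get(self, url):
        return self._transport(url)  # string in, string out: that's the API

@dataclass
class Customer:
    id: int
    name: str

class CustomerClient(DumbServiceClient):
    def get_customer(self, customer_id):
        body = self.execute_get(f"/customer?id={customer_id}")
        # All the fragile parsing knowledge lives here, in the typed layer.
        name = body.split("Name:", 1)[1].strip()
        return Customer(id=customer_id, name=name)

fake_site = lambda url: "Name: Acme Corp"   # stands in for the real website
client = CustomerClient(fake_site)
print(client.get_customer(7))  # Customer(id=7, name='Acme Corp')
```

The design point is the same as the post’s: the base class can be generic, but the deserialization layer has to be written per site.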

Using Twitter more effectively as a software developer

FYI: I’m not a technical recruiter. I’m just a software developer.

Have a clear goal. Is this to network with every last person in the world who knows about, say, Windows Identity Foundation? Or to make sure you have some professional contacts when your contract ends? Don’t follow people who can’t help you with that goal. If you have mixed goals, open a different account.

Important Career Moments Relevant to Twitter. Arriving in town, leaving town, changing jobs, conferences, starting a new company – if you have a curated twitter list, it might help at those time points, or it might not, who knows.

At the moment, there are so many jobs for developers and so few developers that the real issue is not finding a job, but finding a job that you like. Another issue is taking control of the job hunting process. The head hunters most eager to hire you have characteristics like: they make lots of calls per day and they have a smooth hiring pipeline. But there is no particular correlation with what sort of project manager is at the other end of that pipeline.

Goals: Helping Good Jobs Find Developers. I’m talking about that day when your boss says, hey, do you know any software developers? And I say, no, I work in a cubicle where I talk to the same 3 people 20 minutes a week. So that was a big part of my goal for creating a twitter following: so that in 3 years, bam, I can say, “Anyone want a job?” and it wouldn’t be just a message in a bottle dropped in the Atlantic. If you don’t care about the job, don’t post it. If a colleague desperately needs to fill a spot at the world’s worst place to work, don’t post it; you’re not a recruiter, you’ve got standards.

Twitter is a lousy place for identifying who is a developer and who is in a geographic region. After exhaustive search, I found fewer than 2000 people in DC who do something related to software development, and of those, maybe 50% are active accounts. There must be more developers and related professionals than that in DC – I guess 10,000 or 20,000.

Making Content: Questions. It works for newbie questions. Anything that might require an answer in depth is better on StackOverflow. And StackOverflow doesn’t want your easy questions anyhow.

Making Content: Discussion. It works for mini-discussions of maybe 3-4 exchanges, tops. Consider posting a thoughtful question a day. Hashtag it, but don’t pick stupid hashtags or hashtag spam: #dctech is better than #guesswhat. Consider searching a hashtag before using it. Re-use good hashtags as much as possible to build discussion around them.

Making Content: Jokes. It works really well for jokes. Whether you actually engage in jokes is a personal decision; they are somewhat risky. On the other hand, if you never tell a joke, you’re a boring person who gets unfollowed and moved to a list.

Making Content: Calls to Action. I don’t practice this well myself because it’s hard to do on Twitter. The most effective calls to action are some sort of “click this link”, hopefully one where, after I read the target page, I don’t just chuckle or say “hmm”, but do something different in the real world.

Making Content: Don’t do clickbait. Not because it isn’t effective; it is effective at making people click. But everyone is doing it and it is junking up news feeds.

Building a Community: Who to Follow? Follow people you wish worked at your office. They may or may not post the content you like, but you can generally fix that by turning off retweets. If they still tweet primarily about stamp collecting, or tweet too much, put them on a list, especially if they don’t follow you back anyhow.

Building a Community: Finding People to Follow. Twitter’s own search works best: search for a keyword, limit to “people near me”, and click “all” content.

Real people follow real accounts, usually. Real people are followed by a 50/50 mix of spambots and real people. Unfortunately, people follow stamp collecting and cat photo accounts, but are followed by friends, family and coworkers. If you are looking for industry networking opportunities, you care about the coworkers, not the stamp collecting and cat photo accounts.

Bios on Twitter suck. People fill them with poorly thought out junk. I don’t care who you speak for; I don’t care if your retweets are endorsements. Put the funny joke in an ephemeral tweet, not the bio; followers end up re-reading your bio over and over. Include where you live, your job title and keywords for the technologies you care about. Well, that’s what I wish people would do, but if you really want to put paranoid legal mumbo jumbo there, at least make sure it aligns with your goals.

Building a Community: Getting Follow Backs. People follow back on initial follow, and sometimes on favorite and retweet.

Building a Community: Follow “dead” accounts anyhow. They might come back to life because you followed them. Who knows? It’s a numbers game.

Interaction: Retweet or Favorite? Favorite means “I hear you”, “I read that”, “I am paying attention to you”. Retweet means “I think every one of my followers cares about this as much as they care about me.” People get this wrong so much that I generally turn off retweets on every account I follow. I can still see those retweets should an account be on a list I curate.

Retweet what everyone can agree on; Favorite religion and politics. If someone says something you like, it’s a good time for engagement, but not if it means reminding everyone who follows you that after work hours, you are a Republican, Democrat or Libertarian. Favorites are comparatively discreet; the audience has to seek them out to find out what petition you favorited.

In practice, people Retweet when they should Favorite, junking up their followers news feeds with stamp collecting, radical politics, and personal conversations.

Interaction: Do start tweets targeted at one person with the @handle. It keeps that message out of your followers’ feeds. Don’t automatically put a period in front; most people misjudge when to thwart the built-in filter system.

Know Your Audience. I have two audiences: my intended audience of software developers in greater DC, and my unintended audience of people who follow me because they agree with my politics or are interested in the same technologies as me. I have a clear goal, so I know the audience I’m going to cater to is the one that aligns with my goals. I can’t please everyone, and if I wanted to, I would open a 2nd account.

Lists: Lists are for you. Don’t curate a list with the assumption that anyone cares. They don’t. Consider making lists private if you doubt an account wants to hear it’s been put on one.

Lists: Create an Audience List. The people I follow are great, but the people who follow me back are better. I put them on a private audience list, because they don’t need a notification telling them I’ve put them on an audience list.

As for people on my general list who don’t follow me back, I hope they will someday. For the people on the audience list, I care more about their tweets and retweets, because it’s much more likely that I’ll get an interaction someday.

Lists: Create a High-Volume Tweeter/“Celebrity” List. People who tweet nonstop junk up your feed; move them to a list unless they follow you back. “Celebrities” have tens of thousands of followers but follow only a few people. They probably won’t ever interact with you, but if they do, it will be via you mentioning them, not through a reciprocal follow relationship.