On Technical Selling and Technical Discovery

An oft-repeated tenet of technical selling, or even software selling in general, is ‘sell on value’. It’s a sentiment I agree with one hundred percent. It moves me out of the ‘software shill’ category in the minds of my prospective customers and toward ‘trusted advisor’ territory. But to do that, I first need the chance to actively listen to their problem. Then I can demonstrate that I understand their pain, that I can quantify it, and that I care about the customer’s outcome. Once I'm knowledgeable about my customer’s challenges, and if all has gone according to plan, I even come equipped with a solution.

The first mistake I see folks make in the buying process is that a customer - or occasionally a seller - wants to skip the step I call technical discovery. To understand a customer's pain, I, like any technical seller, first need to listen. Any good technical seller is one part therapist, one part technical geek, and one part business consultant. I listen, I apply my knowledge of technologies and processes, and only then can I consultatively suggest solutions, at least some of which I’m hopefully selling.

If there were one concept I would want my prospects to understand, it’s that a demo doesn’t offer them the full value of what technical sellers bring to the table. For my part, as a seasoned technologist, I’ve seen some stuff in my day. I’ve seen at least a few Fortune 500 companies running on Excel spreadsheets passed back and forth through email. I’ve written hundred-line logic trees to clean up messy customer master data. I’ve seen customer master tables sitting idle in a database because nothing maps to them. I’m here to help, and I have a plan.

Of course I want to show customers my product in a demo! I get tingles of excitement every time I do one. But more importantly, I want to see and understand their specific challenges. I want to home in on the problems I can solve with them, and ultimately to help build a business case and pitch that case to stakeholders. As technical sellers, we’re here to help our customers be successful - we know the optimum outcome is when we all win together. The best of us are incredibly invested in the success of our customers, and we can only maximize our true value when we’ve started by understanding a customer’s pain.

Adventures in Marketing: The Highlights

A few years ago, in a foray outside my normal sphere of influence, I spent a few months working cross-functionally to build out an elaborate Executive Marketing Dashboard for the Marketing team at Seagate. 

First, some context: in the past, the Marketing department's leadership had driven their monthly meetings through PowerPoint presentations.  Much of the normal work for analysts across the department would grind to a halt while data was pulled, groomed, graphed, and compiled into slide decks.  Leadership was looking to move to Tableau to stabilize the subject matter, track to consistent goals, and drive the business through an automated set of instruments.

A recent copy of the Executive Marketing Dashboard

I was lucky enough to have some spare time at work around the holiday season of 2016, and while browsing the jobs list for a friend, I noticed an intriguing job description for a Marketing Analyst.  The post's technical needs were consistent with my capabilities, but I lacked the experience within marketing.  That, and I really enjoy my current job.  On a lark, I reached out to see if I could share how my team does what it does, in exchange for some hands-on experience in the marketing world.  And that's how I found myself as the technical owner of Seagate's Executive Marketing Dashboard. 

Know your Audience, Not just your Data!

Whenever you're building a dashboard, it's crucial to understand both your data and your audience.  There's a relationship there, between the two, and it will emerge in a properly designed dashboard. 

In my experience, executive management generally needs to stay at a high-level view of their business.  In the case of the EMD, the need to stay high level was emphasized by the sheer number of topics getting presented within a relatively short forum. 

So rather than designing a drill-into-detail dashboard, the goal was to smoothly transition management from a static PowerPoint presentation into a new world of interactive analytics.  The requirements I was given included some strict guidelines: no filters, no drilling.  Management wanted it to essentially be a one-pager, with 10 different visualizations based on 10 different data sets, all on the same page.  Right off the bat, that meant every viz had to be crafted with attention to dramatic spatial constraints: each one was going to get only about 300 x 400 px worth of room.  Fortunately, since filters and parameters take up space, these requirements weren't at odds with one another. 

Do not adjust your screen

For better or worse, management tends to skew both toward farsighted folks and toward owners of fancier Windows computers with higher-resolution displays, which usually means a higher DPI scaling setting. That causes text to scale differently.

Enter: the Long Form View.  Each viz in the main dashboard is reproduced on a longer-form dashboard, where it gets ample room to breathe - solving for both Windows' scaled-up fonts and the 50+ crowd that forgot to bring their reading glasses to the meeting.

EMD's Long View, with buffer and goal parameters visible, giving mgmt the flexibility to tell their own story.

Choose your own ending: Buffers

One benefit of presenting the vizes in two different ways was that I could sneak in a bit of clever instrumentation I call buffers.  If you build a pair of calculations that find the minimum and maximum values on a chart, and then add a "buffer" constant to them, you can sneak in a hidden reference line that has the subtle effect of re-calibrating the scale of the axis it's built upon. 

buffer implementation explained.png

So, if normally your line chart looks as jagged as the Rockies, you can alter a parameter that drives the buffer constant (I suppose it's a variable now) to scale out the axis such that the peaks and valleys are no more thrilling than the rolling hills of Ohio.  Now, I know this isn't scientifically sound, tinkering with the axis like this, but remember, we're working for Marketing today, not Engineering!
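
Here's a minimal sketch of the idea, assuming a numeric parameter named [Buffer Size] and a measure named [Metric] - both are placeholder names, not the EMD's actual fields.  Drop each calc on Detail and point an invisible reference line at it; the axis quietly expands to include them:

// Hypothetical upper buffer: a point slightly above the highest mark in the view
WINDOW_MAX(SUM([Metric])) + [Buffer Size]

// Hypothetical lower buffer: a point slightly below the lowest mark in the view
WINDOW_MIN(SUM([Metric])) - [Buffer Size]

Raising [Buffer Size] widens the axis range, which is exactly what flattens those Rockies into Ohio.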

Like I said, you gotta know your Audience!

Show a lot of data in a little space

The biggest visualization challenge I had was how to display all the data for the Amazon Marketing vizes.  I had two categories of advertising, AMS and AMG, each with its own vastly different scale, its own goals, spend, revenue, and its own relationship between revenue and goal.  So right off, they needed to be separated. 

green and red in tooltip.png

Because there was so much to track, I needed to find ways of keeping the revenue-to-goal story obvious without being overwhelming.  Since the most important question is "did we make goal?", that point is emphasized in redundant ways.  With the color scheme implemented in three ways, combined with the check/x symbols, it is crystal clear which quarters met goal. 
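
The flag behind that encoding can stay dead simple.  A sketch, assuming placeholder field names [Revenue] and [Goal] rather than the dashboard's actual fields:

// Hypothetical goal check - one calc drives the color, and the same calc drives the check/x shape
IF SUM([Revenue]) >= SUM([Goal]) THEN "Met Goal" ELSE "Missed Goal" END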

At that point I still hadn't shown performance relative to goal beyond a pass/fail, so I added goal lines based on a monetized goal.  The goals are multiples of spend, so I built a calculation based on parameters, then drew goalposts using reference lines.  This way, viewers can also easily see how well we did relative to goal.
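
A sketch of that goal calculation, assuming a placeholder parameter [Goal Multiple] (say, 3 for a 3x-spend target) and a placeholder [Spend] field:

// Hypothetical monetized goal - spend times a parameter-driven multiple, drawn as a reference-line "goalpost"
SUM([Spend]) * [Goal Multiple]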

 

 

 

Getting to know Marketing data

I spent the overwhelming majority of my time getting to know - and automating - the data sets involved in this dashboard.  The data sets are diverse enough in origin and type that most merit their own blog posts.  I'm quite proud of my work on this project because not only did I accomplish the primary goal of building a viz tailored to my audience, but the data sources are automated, and in every case that automation had never been achieved within Seagate before.  No one in Marketing, to my knowledge, had automated Google Analytics into Tableau, and no one had ever automated Sysomos and iPerception data into Tableau using their respective APIs.  This aspect - blazing a trail, then proving out and sharing these huge quality-of-life improvements with my fellow analysts - has been immensely satisfying.  The weeks and months since have been dedicated to training up my fellow analysts on how to take these lessons and expand them to all their day-to-day reporting.

A few highlights from that adventure:

For the entire dashboard, the goal was to automate nine different metrics pulled from six very different data sources:

  • Click Thru Rate via Google Analytics

  • Time on Site via Google Analytics

  • Sessions via Google Analytics

  • Amazon AMS Spend, Goal, & Revenue via Google Sheets

  • Amazon AMG Spend, Goal, & Revenue via Google Sheets

  • Master Marketing Budget via Excel on a shared network drive

  • Social Media Followers via iPerception

  • Internet Community Brand Sentiment via Sysomos API

  • Email Marketing Engagement and Purchasing via SFDC and Business Objects

Under Pressure - a look at the US healthcare system one month into the COVID-19 pandemic

Like everyone else, I’ve found the coronavirus disruptive in ways I’d never seriously imagined. And yet, having lived for the past few years in a remote corner of Silicon Valley, life is not so different from my norm. I’ve always socialized primarily through virtual means, so my dance card has been largely unaffected. I was already getting groceries delivered as the norm, just to cope with the rigors of raising two young boys.

But I recognize the mathematical scale, and the near inevitability once a pandemic matures, of what’s about to wash over us all. I’ve been spending some time using my data prep and data visualization skill sets to produce a rough picture of what’s going on in our nation.

Of course, I used Alteryx to wrangle all my data together. You can download (but not run, as it does use the download tool) the workflow here.

In this case, I used the NY Times dataset that provides county-level caseloads. I used the Alteryx Business Insights data package to provide county- and state-level populations. I used an internally shared hospital beds dataset as well as a kss.org dataset on beds per capita; the former provided county-level detail while the latter was strictly state level. If nothing else, they validate that the aggregate hospital bed count nationwide is roughly between 800k and 1M. Again, I’m strictly looking for directional indicators, so even if I’m off by 20%, that’s OK for my purposes.

https://public.tableau.com/profile/josephschafer#!/vizhome/TimesC19Exploration/CountyCasesBeds

Truth be told, I’m rehashing a thesis I tested back in 2014, when I was a candidate for a Sales Operations job working for Michael Mixon at Seagate. An Ebola outbreak was ravaging West Africa, and I made a prediction: while Sierra Leone at the time had the fewest total cases, its healthcare capacity was already maxed out, so I forecast that it was most in need of assistance and triage. As my candidacy progressed, Sierra Leone took the lead in cases and consequent deaths, and my thesis proved broadly correct. The world started seeing cases pop up outside the outbreak’s origin, the US, among others, rallied to the cause, and we managed to avoid the worst-case scenarios.

This was on my radar in early February, but it did seem like we had a chance to get ahead of it. Obviously that didn’t happen, and by late February I saw this as likely to be an issue as China’s numbers started looking a little cleaner than one should expect - they still do, frankly, but that’s another story…

At any rate, what I see are some very intense caseloads relative to healthcare capacity in particular patches. I use either hospital beds or hospital beds per capita (depending on the granularity of the data) and arrive at a rough approximation of a supply/demand analysis.
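
The metric itself is nothing fancy. A sketch, assuming placeholder field names [Cases] and [Hospital Beds] rather than the exact fields in my workbook:

// Hypothetical caseload-to-capacity ratio, computed at whatever grain the bed data supports
SUM([Cases]) / SUM([Hospital Beds])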

All the headlines out of NYC tend to validate the conclusion that as cases tread even close to the count of beds, the healthcare system is overburdened and more bad news will follow. And strikingly, Summit County, UT, which is currently the upper bound for the cases/beds metric, is indeed suffering.

If you’re a data geek like me, check it out on Tableau Public. Reach out at Joseph.Schafer@Alteryx.com with questions, ideas, etc.

A Tableau-based Row Level Security Primer

I wanted to share some of the most useful Row Level Security articles I’m aware of and talk at a high level about one of Tableau’s newer features, ‘multi-table extracts’.  I’ve deployed all of these solutions as a customer at Seagate, once upon a time, so I’m happy to connect with other Tableau Champions and guide them through the minutiae of implementing this.

The most common misconception with Row Level Security is that you must duplicate data to make it work in an extract. This is not true, and I’ve implemented alternatives successfully at scale. 

A quick summary of my experience with Row Level Security in extracts: you can automate extracts to refresh even on an hourly cadence and get near-real-time data that performs at scale, and you can use your preferred security tables and extract them to be updated hourly as well.

So, with some thoughtful data preparation, or even just a minor reconfiguration, you can get near-real-time data in a performant Hyper extract sitting on Server, and from there you can apply row level security to it.  It’s a win-win: you’re no longer dependent on an under-performing data platform for a live connection, and when you do resort to extracts, you’re not suffering the performance hit of inflating the data with cross joins to set up row level security.

Multi-Table Extracts

https://www.tableau.com/about/blog/2018/10/you-can-now-choose-multiple-table-storage-extracts-94776

Because we now offer multi-table extracts, there’s another great option for solving the explosion of data due to row level security, and this one involves no data prep at all.  In the example below, you could have two tables like the following:

[Account] | [Authorized User]
NetApp    | Antonia Kealy
NetApp    | Joe Schafer

Joined by Account to:

[Account] | [Qty]
NetApp    | 100

Creating a denormalized data set like this: 

[Account] | [Authorized User] | [Qty]
NetApp    | Antonia Kealy     | 100
NetApp    | Joe Schafer       | 100

 

But because we’re storing the tables in the extract without joining them in advance, the join happens at query time and the data set is never duplicated.  So, in short, we have lots of ways to efficiently scale Row Level Security in Tableau, and I’m happy to help folks work through some POC use cases on this topic.  For reference, there may be cases where multi-table extracts aren’t as desirable as the fancier option of creating a concatenated string of authorized users - chiefly, when you already need to prepare the data in advanced ways, or when you want to pull your ‘metrics data’ and your ‘authorization data’ asynchronously.  For example, if your metrics data is so large that you can’t pull it more than once a day, but you still want the authorization data pulled hourly, that’s a good scenario to prefer one option over the other.
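
For completeness, the row-level filter itself stays simple in this setup.  A sketch, using the field names from the example tables above; it would be applied as a data source filter on the published source:

// Hypothetical data source filter - with the two tables related on [Account], only the
// rows mapped to the signed-in user survive at query time
[Authorized User] = USERNAME()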

Recommended Reading on Row Level Security:

General overview: 

My former manager’s blog on how we implemented RLS at scale at Seagate using a hybrid approach of Active Directory groups and row level security.  Note that his blog doesn’t mention a trick that shows up in the Part 2 article at the bottom of the list, where we later optimized our extract sizes using some data prep tricks.

Some additional strategies for implementing Row Level Security in parts 1 and 2:

The short version of the final article: instead of keeping two rows to give two individuals access to NetApp’s row of data, we keep a single row with both authorized users concatenated, which, at scale, drastically reduces the size of the extract.

Instead of this:

// filter logic: username() = [Authorized User]
// returns TRUE for Antonia and Joe
[Customer] | [Authorized User] | [Qty]
NetApp    | Antonia Kealy     | 100
NetApp    | Joe Schafer       | 100

Use this:

// filter logic: contains( [Authorized Users] , username() )
// returns TRUE for Antonia and Joe
[Customer] | [Authorized Users]        | [Qty]
NetApp    | Antonia Kealy, Joe Schafer | 100

A Mueller Report Wordcloud, or Who Needs Regex Anyway?

It’s Thursday, April 18th, 2019. Which means, for Americans and the world, it’s officially Mueller Time!

That said, it’s a workday and I don’t really have time to read 440 or so pages written by lawyers, even legendary lawyers like the crew in the Special Counsel’s Office. What I do have time for is a word cloud. I’ve never really made a word cloud before, so let’s get to it! Here’s one flavor of the finished product. Simple, elegant, and quite Soviet.

Sheet 1.png

Step one: Find some words!

  1. Find some words. (Done.)

    1. While the original report is not searchable, a variety of folks have now offered searchable versions. I found mine on Google Drive.

  2. Copy/Paste the words from a searchable PDF into a powerful word processor.

    1. Notepad didn’t work for me. For some reason the clipboard was too big for Notepad. I first used OneNote, but I was worried it wouldn’t support the next step.

    2. So, for this I resorted to Notepad++, because my next step is to get this into a single column of words (and garbage characters, symbols, and numbers).

  3. In Notepad++ I performed a find/replace on all the spaces ‘ ‘ to convert them to carriage returns.

    1. This was my reference.

  4. Open my file in Tableau Prep for cleaning.

    1. Rename my column “Text”

    2. Clean -> Remove Punctuation

    3. Clean -> Remove Numbers

    4. Create a calculated field to clean up some of the weird remaining special characters (a sketch appears after this list)

    5. Sort the remaining text in descending order - this will naturally sort by count([Text]) - and exclude the common English words that aren’t pertinent: common articles, helper verbs, etc. (“the”, “to”, “of”, “and”, “that”, ...).

      1. This part’s got a bit of subjectivity to it, but my frame of reference is that ambiguous words or even less evocative words, like pronouns, “him”, “his”, “I”, aren’t helpful to the viz. I made this an iterative process.

    6. Lastly - and this is something one could also do in Tableau, but it’s useful to understand how to do it in Tableau Prep Builder - I’m going to filter out the words with only a single instance. I know this data set is going to have a long tail, and many of these instances are either garbage data (an artifact of reading in a PDF image and converting it to text) or simply never going to appear on the word cloud anyway, so this keeps the data set a bit more manageable.

      1. First, and this is the interesting bit, duplicate the [Text] Field, because in Prep Builder, you need one instance to Group By and one instance to Count() or otherwise to aggregate.

      2. Then, add an Aggregate step, and put the original [Text] in the Group By side and the duplicated [Text] in the Aggregate side and set it to Count.

      3. I then Joined the original Data set to the Count, because what I really want is to let Tableau do the aggregations after I filter out the least commonly used words.

      4. Add another clean step and filter by calculation (see the sketch after this list). I opted for >= 10 because I want this data set to perform reasonably well.

    7. If you still see ‘common’ words which aren’t interesting, revisit Step 5 or exclude them in Tableau.

    8. Output to a Hyper file and build out your Word Cloud in Tableau.

    9. I also opted to filter out additional ‘ambiguous’ words in Tableau Desktop.

    10. I built a filter in Tableau Desktop to further limit the number of words by count. I settled on about 60 words, which kept the viz manageable.

    11. You can find my workbook on Tableau Public: https://public.tableau.com/profile/josephschafer#!/vizhome/MuellerReportWordCloud/MuellerReportWordCloudrwb
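
For reference, here's roughly the shape those two calculations might take. This is a sketch with assumed names: [Text] is the word column from the flow above, and [Text Count] stands in for whatever your aggregate step names its count field.

// Hypothetical cleanup calc for the "weird remaining special characters" step: keep letters only
REGEXP_REPLACE([Text], '[^A-Za-z]', '')

// Hypothetical filter calc applied after the aggregate + join: keep words that appear at least 10 times
[Text Count] >= 10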

flow.png
Mueller Report Word Cloud (rwb).png

Pushing Data out to Python using Alteryx and the 'Run Command' Tool

Let’s say you’re looking to invoke a .py file from Alteryx…

Take a look at the macro workflow. You’ll see 4 ‘Control Parameter’ tools at the top. Below each is an ‘Action Tool’. All are affecting the configuration of the ‘Run Command’ tool. Below that, a simple ‘Text Input’ tool passes a datetime field into a Select tool, which helps confirm the datetime is formatted correctly. A Formula tool follows, replacing the original 1899-01-01 00:00:00 datetime with DateTimeNow(). This isn’t particularly relevant, except that a Run Command tool requires either an input or an output, so we’re providing an input that in turn outputs a little file recording the last run time.

macro workflow.png

If you need to call four arguments, you might opt to have four separate Action tools, as depicted here, or to instead concatenate the four fields into one and pass that single concatenated field through one Action tool. Below, I’ve chosen to write 4 unique placeholder arguments into the ‘Command Arguments’ field. Each will be uniquely addressed by a pair of tools chained together to substitute in a replacement payload from the parent workflow.

configure run command.png

As depicted below, there are four unique ‘placeholder’ args in the ‘Arguments’ field of the ‘Run Command’ tool: 1st, 2nd, 3rd, and 4th. Each has its own ‘Action Tool’ and, in turn, its own ‘Control Parameter’.

name your fields in control parameter tool.png

The Control Parameter simply labels the component being edited. In my example, the 1st Arg is labeled consistently in both the placeholder string and the Control Parameter label.

You can see how the ‘Action Tool’ is configured below:

configure action tools.png

Lastly, the easy parts:

parent workflow.png
4 args txt.png

You’ll load the macro with 4 arguments from your data source. In this case, I’m just using a ‘Text Input’ tool. I only want my .py file to run once, so I’m content with passing a single record into the macro. Pretty simple.

You’ll configure the macro such that the 4 fields from your data are correctly aligned to the 4 arguments of the macro. In the example, I tried to ensure accuracy by giving the 4 fields names that match the numeric order in which the .py script will use them.

4 args confugured macro.png

Thoughts on Organizing Content and Cross-Team Content Management

As my Chemistry Prof often said, “keep it simple, stupid”.

 

For cross team ownership, much depends on the team members’ relative familiarity with the Source Data, any ETL work to Tableau, any Tableau Calculations, and any “Hacks” used to create the workbook.  If your pool of users is relatively unfamiliar with any of the above for a given workbook, you’ll want to have documentation built into the metadata, typically in the form of comments in calcs, commented metadata, and extensive, routine knowledge share.

 

A healthy analytics culture will encourage knowledge share as part of their routine, so that any new developments on the topics above are well-circulated amongst the pool of analysts.  If you have a well-trained team, then shared ownership isn’t too daunting.

 

From a management perspective, reward not just the brightest, but the brightest who share and educate others.  Those are the folks who keep you safe from the lottery (a.k.a. the bus) problem.

 

When it comes to organizing dashboards into “projects” and “workbooks”, I think the former should be organized by need-to-access, by subject matter and functional area, or possibly both.  The company’s cultural perspective on openness will play a big role in informing the access decisions.  That said, it’s also important to keep permissions and access paradigms as simple as possible.  Most organizations tend to over-complicate permissions and access, which leads to reduced productivity, confusion, and frustration - all bad things.  Tableau is very open and leads by example on that front.

 

For workbooks, I prefer to keep them simple; there are other, better ways to merge content besides putting lots of different content into one monster workbook.  You can run a meeting from a handful of links in the meeting agenda, each going to a simple workbook, which keeps things straightforward, organized, and coherent. 

 

This approach also keeps tags and search on Tableau Server especially meaningful. For an example of what not to do: if you tag a single monster workbook with every subject under the sun (because it contains every subject under the sun), you dilute the effectiveness of tags as a search index, and people get lost and can’t find their content effectively.

Connecting Alteryx to BigQuery Tables

https://cloud.google.com/bigquery/docs/enable-transfer-service

Install the x64 ODBC driver on your local machine.  If you're going to have Server run your BigQuery workflows for you, you'll also need to install it on your Alteryx Servers.

 

 

You'll use your Google account to sign in and to check out a refresh token to authenticate your ODBC requests.  A word of caution: I don't know what happens when your account's password expires... should be interesting to find out.  I also don't know how well this will work from the Server, or what alternatives exist for a system account besides personal logins.

 

 

 

Embedding Tableau in Salesforce

If your org is anything like mine, it's constantly looking for efficiencies, particularly improvements to the Sales Funnel and Sales Tasks.

A few years back, my team, a sales-oriented Business Intelligence team, was tasked with building out a series of embedded dashboards within our SFDC instance.  The goal was simple: push data to the Sales Reps and Account Managers within their platform.  If the sales team has an account page open, they should see that account's metrics.  This keeps everyone on the same page, in the same tool, and focused on sales tasks, rather than herding and filtering reports, reconciling data, and chasing it around week after week.

The Plan™

To deliver on our goal, we brought together a significant number of resources.

As Dashboard Developers, we wanted to standardize the look and feel to match SFDC's environment.

As Dashboard Developers, we participated in discussions to gather business requirements from our Sales Enablement team and the Sales Staff.  The former took on the project management role and did a great job facilitating our meetings with both sales executives and our sales teams. 

Key to these discussions was the refinement and standardization of KPIs across regional sales teams and product-owner silos.  Based on these meetings, we ultimately included a few "have-your-cake-and-eat-it-too" considerations, where we'd use parameters to allow for both gross and net, or to allow for different filters (such as including consumer or flash, or excluding one or both). 
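
Those parameter-driven toggles are simple calculated fields.  A sketch, assuming a hypothetical string parameter [Gross or Net] and placeholder measures [Gross Revenue] and [Net Revenue] rather than our actual field names:

// Hypothetical measure swap driven by a parameter, so one viz serves both camps
IF [Gross or Net] = "Gross" THEN [Gross Revenue] ELSE [Net Revenue] END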

Fortunately for us, a significant share of our dashboards were based on some existing dashboard or report, so the effort focused primarily on adding a few dimensions to various data, standardizing data sets wherever possible, and applying a standard look and feel.

Row Level Security Explained

We also made sure to tack on row-level security, which is a pretty cool feature.  I wish I could take credit for it, but it was a collaboration between IT and Tableau, driven principally by my boss and mentor, Mike Mixon.

As mentioned briefly, part of our requirements was row level security for Account Reps, plus Regional and Global access based on our LDAP directory.  This was done using a boolean IIF statement.  This first true|false calculation is what gets imposed as a data source filter on the published data source on Tableau Server.

I abbreviated the logic for readability:

[User Check]

IIF(
(ISMEMBEROF("Global Tableau User"))
or
(ISMEMBEROF("AMER Tableau User") AND [Region] = "AMER")
...
(ISMEMBEROF("JAPAN Tableau User") AND [Region] = "JAPAN")
or
(CONTAINS([ID Match],[User Logged In])),1,0) = 1

At the end, you'll notice there are a couple of additional calculations that need explaining.

[User Logged In] is simply:

USERNAME()

[ID Match]:

// [Find] is simply FIND([Associated IDs], [User Logged In]) saved as its own calculated field
// FIND: returns the position of [User Logged In] within the long list of IDs in [Associated IDs]
// MID: extracts the matching substring of [Associated IDs], starting at that position
// LEN: the length of [User Logged In], ensuring a complete match

IF [Find] >0 THEN  
MID([Associated IDs],
FIND([Associated IDs],[User Logged In]),
LEN([User Logged In]))
ELSE "" 
END

You might ask, "How did we get a field of Associated IDs?"  Well, for that we pulled the data from SFDC, with some help from IT, to determine who each Account Owner was, and then traversed up the sales hierarchy for each Account Owner to determine their manager, and then their manager, and so on, iterating up the chain of command.

Starting with a long, narrow list of every combination of Account and personal ID with authority to see that account, we then did a relatively simple transform in Alteryx to concatenate every ID that has visibility rights into a given Account into a single field.  That field was then joined at the Account ID level to the data set as part of the data preparation for each pertinent data source.

 

Still to Document:

Actions

How to filter, and how to URL out to Salesforce (create tasks, open opportunities, etc.).  I have already talked a bit about how to make a URL action work in a previous blog post here.  Basically, the only difference is that instead of passing <stock> combined with a Google Finance URL, you'd pass an opportunity record.

Data Governance

SFDC matched to EDW: make sure account ownerships, for example, are correct and up to date.  We have guard rails in place to ensure that accounts owned by inactive users are flagged and sent to Sales Managers.

 

 

Caveats:

A little shop-talk: So far, we're an exclusively on-prem Tableau shop.  I have heard that Tableau Online makes this a bit easier with regard to authentication, which is not my team's purview.

For Tableau on-prem shops, there's a tool called Sparkler that passes authentication from Salesforce into Tableau's on-prem server.  For us, it hasn't been a flawless experience as authentication technology evolves, so if you're on-prem, suffice it to say that you'll need to set aside some resources from your authentication folks to keep the integration seamless. 

In our experience, this isn't a one-time IT investment, but an integration capability that needs to be kept up to date as authentication standards and Tableau versions evolve over time.  I'm not certain what a cloud-to-cloud integration looks like, so your mileage may vary if that's the sort of landscape you're in.

Our mandate includes phone, tablet, and desktop support from within Salesforce.  Far and away the trickiest to implement is Tableau on a phone via the Salesforce app.  There seem to be limitations to what the Salesforce Canvas (think "browser") is capable of.  We still have a few bugs there, mostly around actions that involve navigating from a dashboard to an SFDC record (via a URL Action in Tableau) and then back to the dashboard.  This seems to be more common on the iPhone platform than the Android platform.  If you're performing the same function from within the Tableau Mobile app, there's no issue, so the suspicion falls on Salesforce Canvas.

Going from the dashboard to a particular opportunity via a URL Action is wicked cool.  Pre-populating a task form based on the dashboard data is similarly awesome and pretty straightforward, once you acquaint yourself with the proper fields and how they map to your Tableau data.

For example, consider a Tableau Calculated Field that opens a new task with some of the task form pre-populated from the data set.  It essentially assigns a task to the account owner with a comment about how the opportunity is past due - the sort of action you'd expect to come out of a periodic management review of past-due pipeline opportunities, to close the loop on them.  In roughly two clicks, a task requesting an update is assigned to that rep!
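
A minimal sketch of what such a calculated field can look like, used as the target of a URL Action.  This assumes the classic Salesforce URL-hack style (00T/e for a new Task form, tsk5 for the Subject, retURL for the return page) and hypothetical field names like [Opportunity ID] and [Opportunity Name]; your instance's domain, parameters, and fields will differ, and the assignee fields are omitted here for brevity:

// Hypothetical URL-builder calc for a "please update this past-due opportunity" task
"https://yourinstance.salesforce.com/00T/e"                                      // open a new Task form
+ "?what_id=" + [Opportunity ID]                                                 // relate the task to the past-due opportunity
+ "&tsk5=Past-due opportunity - please provide an update: " + [Opportunity Name] // pre-populate the subject line
+ "&retURL=/" + [Opportunity ID]                                                 // return to the opportunity record after saving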

We have seen a few issues with the mobile Salesforce App - usually around actions, such as navigating back to a dashboard, and they're more common from the Apple side.  One method that I would recommend 


 

Reflections on Interviewing

This post is a deviation from my usual subject matter and should come with a caveat - I’m not an expert on this topic, and maybe I’m the one who needs advice. 

I’ve been honored to have been asked to conduct quite a few interviews for my current employer lately, and even to participate on both sides of the same process for a particular internal position.  I think that somewhat atypical scenario of dueling perspectives was the genesis for this post.

download.jpeg

I’ve always taken a very informal tone as an interviewer.  I do so for a few reasons.  

I hope to share my best self: easy to collaborate with and happy to be here, doing my part for the company and the team.

My goal is to disarm and de-stress an interviewee.  Interviewees tend to be tense, and they perhaps aren’t their best selves when tense; in my experience, the more formal the atmosphere, the more vanilla and canned the responses.  

interrogation.jpg

In summary, I desperately want any interview I participate in to feel nothing like an interrogation, lest a promising prospective candidate imagine that’s the norm for the job.

I’ve walked out of an interview before genuinely exhausted and stressed.  I wouldn’t even have accepted the job, and I wouldn’t be remotely surprised if I subconsciously torpedoed my interview, as if I had some kind of programmed mental aversion to unnecessarily stressful work environments.  Did I mention I don’t provide emergency medical care?  (For good reason, I’d imagine.)

jill-greenberg-crying-photoshopped-babies-end-times-17.jpg

There’s a place for stress in our modern lives.  It’s our body’s way of expressing fear, uncertainty, danger.  To the extent that stress helps us identify and, within reason, avoid or mitigate such conditions, stress is meaningful and useful.  Stress, well applied, should cause us to study, to work hard, but not to the point of exhaustion.  It should bring out a seriousness and focus, but not paralysis and agony.

In my view, taking a naturally stressful situation and needlessly amplifying it is simply cruel.  And I’m not talking in abstractions - my observations are rooted in real tears, sleeplessness, and hurt.

I’m happy to say that I think my methods get the right results.  I end up with a better sense for the person behind the candidate and I convey a truer feel for our prospective working relationship.