a satire by The Pudding
You’re Only Human.
Don’t Let That Stop You From Advancing Your Career.
The job market feels more competitive than ever. Small missteps that a forgiving human might once have overlooked can now automatically dispatch a computer-generated rejection to your inbox. Did you comment in defense of a Jeff Dunham video back in 2009? You might be flagged as having antisocial tendencies. Did you accidentally use the British spelling of “conceptualize” on your CV? You’re illiterate. Your participation in a quirky Brainfuck hack-a-thon might put you on a blacklist for abusive language.
When an estimated 70% of job applications are automatically rejected by applicant tracking systems (ATS) and companies are screening out applicants based on their social media habits, it can seem like technology is up against you.
That's because it is. Well, most of it anyway. At deepwork, our software works for you, so you can work. Period.
With our cutting-edge technology, you’ll be more than just another statistic. We are able to algorithmically alter your digital DNA, from your overwrought resume down to your Zoom-fatigued face, turning you into something that’s proven to resonate more powerfully with employers, recruiting agencies, and the state-of-the-art ATS they use. And with Klarna, you can pay later. You know, when you can afford it.
Grammarly can't save your garbage resume. Experience can.
Unfortunately, since at least 2014, journalists and academics have noted the emergence of an industry-standard paradox: requiring two or more years of relevant work experience for an entry-level job. If you don't have the social connections to make the right introductions (you don't) and can't afford to both intern and eat for the next half decade, then it’s time to invest in a better future with our Resume Atelier. Student loan repayments be damned.
Explore samples of our STEMsational real fake resumes in our product demo here. Simply use the dropdown menu to select your aspirational level of experience and pivot your way to success!
Using cutting-edge neural network technology trained on thousands of resumes, deepwork can generate a brand new resume for you in your field of choice.
Some of the entries don’t make sense? Don’t worry: Recruiters don’t know the difference. Sign up to create a plagiarism bot-beating one-of-a-kind resume that represents your value as a worker better than your achingly sincere cover letter ever could.
Profile Picture Studio
Grow up, glow up, or man up your profile picture game.
You’re not the only one job hunting right now, and as long as looks continue to influence hirings and salaries, you’ll need all the help you can get.
Today’s LinkedIn profile photo is what a firm handshake was thirty years ago: A totally arbitrary signifier of your entire worth as a professional. You might be able to Facetune that zit away but making your LinkedIn picture recruiter-ready might be a little more complicated. If you’re too young, too old, too ugly, too female—you might just stay too underemployed.
Play with the sliders on any of our models to get an idea of how you could grow up, glow up, or man up your own photos.
Luckily, with deepwork’s Profile Picture Studio, you can alter photos of yourself to be younger, hotter, or more masculine. The resulting photo will still be you, just a you that looks a bit more suitable for an “About Us” page.
A future version of deepwork will include a glow up feature that will make your inner beauty more evident on the exterior. We’ve experimented with off-the-shelf solutions, but they’re plagued by the same technical failings that equate whiteness with beauty, which represents neither true beauty nor our transformative aims. We want to foster workplace inclusivity (of a shinier, more impressive you), not perpetuate corporate segregation. Until we unveil our custom solution, you can explore the blind spots of the available technology on the “Basic to Yassified” slider, and rest assured that we are working hard to remove the colorist tendencies of this feature.
Choose a Photo
Not all biases are so easy to place. Like class, for instance. Maybe your look says “Bud Light Platinum” more than it says “Moët & Chandon Réserve Impériale.” We can improve your odds of being a good company “fit” by combining your profile picture with various stock photo models from our library.
Use the controls to blend two photos together.
If this feels like a gross intrusion on one of the most basic representations of who you are as an individual, remember: Nothing personifies the great melting pot of corporate culture like a stock photo.
As an added benefit, using deepwork to alter your appearance will make it less likely for potential employers to conclusively recognize you on your other social media, just in case you forgot to delete that cinnamon challenge video.
Mix Your Pics
Thought Leadership Package
At a certain point in your career, you’ll want to signal to potential employers that you’re ready to take on more responsibility (and a higher paycheck).
The best way to showcase your suitability for a higher-ranking position is via an elaborate mating ritual known as Thought Leadership.
Thought Leadership refers to the production of materials on social media or personal blogs with the sole purpose of demonstrating how well your personality can conform to a specific corporate aesthetic. For Thought Leadership to be successful, its creator must walk a fine line: Demonstrate a thorough immersion in corporate culture—think soundbite-ready non-opinions like “It’s now more essential than ever that moms understand the environmental impacts of big data”—while sharing a few relatable work-appropriate interests (like cycling or kite surfing) to maintain the illusion of personhood.
Try out our technology by combining Tweets from some of our favourite (but not affiliated) ideologues.
Performative pandering to an unknown audience not your cup of tea? That’s true for everyone. But recruiters and employers are looking at your social media whether you like it or not, so you might as well put on a good show. Because the concept is relatively new, it’s hard to quantify the impact thought leadership has on your career trajectory. We’re not above invoking career advice iconoclasts like Askmen.com or DHL to convince you that you need to become a Thought Leader today.
Demonstrating Thought Leadership on your social media can be daunting, especially for first-timers who may be led astray by the misconception that the term inherits any meaning from either of its component words. If that sounds like you, deepwork’s Thought Leadership package lets you stand on the shoulders of some of the most influential egos in your field.
Head to our product page to explore pricing for our one-time service, as well as our subscription packages that support weekly, daily, hourly, and quarter hourly posting. If you’re feeling bold, signing up for our package will allow you to throw your own social media handle into the mix to hybridize your own thoughts with objectively more successful people.
Video Conferencing Plug-in
When opportunity calls, don’t be afraid to pick up.
Though deepwork can’t help you with technical interviews (take a look at Copilot for that), we can help you make a more professional first impression in your Zoom interviews. (If the concept of a late-stage interview is foreign to you, consider our Resume Atelier and Profile Picture Studio.)
Our Video Conferencing Plug-in package uses all the same looks optimization technology available in our Profile Picture Studio package, but takes it to the next level by letting you assume your avatar’s much more hirable appearance for video calls.
You can preview our video generation process here, and watch your chosen face act out expressions such as “In this economy?” Subscribe to gain access to even more reactions, including “Thoughtful consideration” and “Gen Z, what are you gonna do?”
Our technique utilizes something experts call the First-order Motion Model method, which is widely associated with so-called “deepfake” technology. But don’t let media pundits and their pretty privilege put you off—what’s fake about your commitment to attaining meaningful employment at any cost?
With the rising popularity of flexible work arrangements, we’ve responded to intense customer demand by expanding our subscription services for longer term packages! After you score that dream role, there’s no reason to spoil the fantasy. As long as you’re WFH, your coworkers will never have to know you’re a sweaty, poorly lit, recent grad and/or ugly.
Ready to deepwork?
deepwork is trusted by hundreds of thousands of underemployed, underpaid and underappreciated individuals to solve at least that first problem. Who knows? You might have a deepworker in your LinkedIn network already.
Don’t believe our technology can help you unlock your true earning potential? Listen to what our customers have to say about deepwork.
An absolute breeze to onboard!
I was initially a bit morally conflicted about deepwork, but their onboarding was so seamless, I barely had time to think about it!
deepwork gets it
FINALLY! A company that acknowledges that getting hired in the 2020s is a dog-eat-dog world. deepwork is like giving a mecha suit to a chihuahua.
Doubled my earnings
Being a man in the e-romance business has always put me at a bit of a disadvantage. But with deepwork, I’ll have enough money for my “passport and plane ticket over” very soon.
A breath of fresh air
Most businesses work best when they can replace a malfunctioning cog as easily as ordering a spare part from an Ikea catalogue. The fact that you're a unique human being doesn't even factor in. Stop playing their games and get deepwork.
While deepwork isn’t a real company (yet), all of the technology used to generate our feature demos comes from real, easy-to-access code you can find online, and all of the grief comes from real trends observed in at least some corporate settings. That said, it’s not that serious. If you enjoyed the article, keep reading for more about my process.
- "Woman in pink crew-neck in closeup photography" by Mathias Huysmans
- "Woman in white dress shirt" by Christina @ wocintechchat.com
- "Man in gray and black checkered sport shirt and black pants outfit" by Himanshu Dewangan
- "Man standing beside wall" by LinkedIn Sales Solutions
- "Man wearing black and teal dress suit standing near gray wall" by Gregory Hayes
- "Woman in black and white striped shirt sitting on chair" by Ty Feague
- "Man wearing gray suit jacket and dress pants" by Roland Samuel (picture no longer on Unsplash)
- "Woman smiling and sitting" by Christina @ wocintechchat.com
- "Happy senior man giving thumb up, sitting at desk using laptop computer at home." by StockLite
Initially, I wanted to fine-tune a GPT-2 model on resume data myself, similar to what I did in the Thought Leadership demo. Unfortunately, even though your LinkedIn profile information is definitely available online and it’s perfectly legal to continue to scrape that data for now, I really didn’t want to insert myself or The Pudding team into that type of debacle. I found the remaining freely available datasets, hosted on sources like Kaggle, either too sparse or too specific to work with (at least at the time of writing).
Fortunately, Max Woolf, Thomas Davis, and the folks behind JSON Resume already did the hard work of creating an open-source fake resume generator based on a recurrent neural network. Their models were trained on approximately 6,000 real resumes, reportedly sourced from GitHub gists (which would explain the technical lean of the resumes generated).
All the resumes generated in this feature come from those same models. The experience levels differ only in the amount of data produced for certain fields (e.g., “Student” resumes generally have fewer examples of work experience than “Intermediate” or “Senior” resumes, and “Students” hold specific roles for shorter periods of time overall).
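For the curious, that leveling scheme can be loosely sketched in code. The per-level caps below are hypothetical stand-ins, not the generator’s actual parameters:

```python
import random

# Hypothetical per-level caps; the real generator's parameters differ.
LEVEL_PARAMS = {
    "Student":      {"max_jobs": 1, "max_years_per_job": 1},
    "Intermediate": {"max_jobs": 3, "max_years_per_job": 4},
    "Senior":       {"max_jobs": 6, "max_years_per_job": 8},
}

def sample_experience(level, rng):
    """Sample (role, years-held) entries sized to the chosen experience level."""
    params = LEVEL_PARAMS[level]
    n_jobs = rng.randint(1, params["max_jobs"])
    return [(f"Generated Role {i}", rng.randint(1, params["max_years_per_job"]))
            for i in range(n_jobs)]

rng = random.Random(0)
student = sample_experience("Student", rng)
senior = sample_experience("Senior", rng)
```

A “Student” draw can never produce more than one short stint, while a “Senior” draw can stack up to six longer ones.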
Profile Picture Studio
The features used in Profile Picture Studio are all based on the same principle. In lay terms, you take a model trained on many photos of people and computationally create a new artificial face that closely resembles your photo of choice. Then you tweak the newly generated face. In not-so-lay terms, this is called projecting the photo into the latent space of the model. Jason Brownlee provides a reasonable explanation of the technique on his website (for those of us with interest, but without the patience to parse an academic paper).
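The projection step itself is just an optimization loop: start from some latent code and nudge it until the generator’s output matches the target photo. The real pipeline does this against StyleGAN2 with a perceptual loss; the toy below uses a one-parameter “generator” producing a single “pixel,” purely to show the mechanic:

```python
import math

def generator(z):
    # Toy stand-in for StyleGAN2: maps a latent value to one pixel in (0, 1).
    return 1 / (1 + math.exp(-z))

def project(target, steps=2000, lr=0.5):
    """Find the latent z whose generated 'pixel' best matches the target."""
    z = 0.0  # start from a neutral latent code
    for _ in range(steps):
        out = generator(z)
        # Gradient of the squared error (out - target)^2 with respect to z,
        # using sigmoid'(z) = out * (1 - out).
        grad = 2 * (out - target) * out * (1 - out)
        z -= lr * grad
    return z

z_star = project(0.8)
```

Once a photo has been projected this way, “tweaking the face” amounts to moving the recovered latent code along directions associated with attributes like age or smile.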
The demos in this article were made using GitHub user Woctezuma’s code for projecting and altering photos. Woctezuma uses NVIDIA’s StyleGAN2 model, which was trained on 70,000 photos of people sourced from Flickr (a dataset known as FFHQ).
Though I discussed it briefly in the article, I wanted to again acknowledge that the “Basic” to “Yassified” filter generally gives subjects lighter skin and more predominantly European features the further toward “Yassified” it is set. I do not equate white skin with physical beauty, and neither does The Pudding. This is a consequence of the underlying technology used. In writing this explanation, I don’t mean to excuse the implications of this filter’s use, nor of others like it, but fixing it is beyond my capabilities, and I wanted to include some representation of digital “beautification.”
Despite my personal inability to “fix” the issue of racial and other biases in data science involving images of people (e.g., consider this write-up on artist Jake Elwes’s work around queer representation in datasets used in data science), I wanted to provide a bit more information on how this phenomenon comes about, particularly around beauty filters and beauty assessment.
The specific beautification attributes I used come from Woctezuma’s code, which in turn uses code from a project called “seeprettyface.” The seeprettyface website notes that to create the labeled data required to determine a generated picture’s attributes (e.g., beauty, gender presentation, smile, etc.), they used a combination of Microsoft’s image recognition API and Baidu Cloud. Baidu Cloud’s Chinese-language API documentation describes features for measuring and enhancing image subjects’ beauty. I could not find anything in Microsoft’s API documentation on beauty measurement or manipulation, which suggests to me that the beauty measurements seeprettyface used to create the labeled data behind the “beautification” filters come from Baidu.
I cannot find a specific reference that identifies how Baidu measures beauty. This may be a failure of my searching, a language barrier, or because the company wishes to protect trade secrets. In lieu of specific information on Baidu’s methods, I can speak to some of the methods and factors that impact other popular beauty filters or digital measurements.
It is important to understand that in order to measure or create a filter for facial beauty, Baidu’s algorithm very likely first needed some form of labeled data on facial beauty, as is common practice in most machine-learning tasks. They may have created their own dataset from scratch and had people rate each picture, used an existing dataset of faces and had people rate those pictures, or used a dataset created specifically for facial beauty perception that already contains people’s ratings of the subjects’ attractiveness. It’s possible that Baidu used a different method entirely, though that seems unlikely given the state of the art.
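Whichever route Baidu took, the labeling step generally reduces to aggregating many raters’ scores per image into one target value. A minimal sketch, with illustrative image IDs, scores, and cutoff:

```python
from statistics import mean

# Illustrative raw data: image ID -> scores from individual raters.
ratings = {
    "img_001": [4, 5, 4, 3],
    "img_002": [2, 1, 2],
    "img_003": [3],
}

def build_labels(ratings, min_ratings=2):
    """Average each image's scores, dropping images rated too few times
    (many datasets impose a minimum number of ratings per image)."""
    return {img: mean(scores)
            for img, scores in ratings.items()
            if len(scores) >= min_ratings}

labels = build_labels(ratings)
```

Every bias held by the raters flows directly into these averaged labels, which is exactly the problem with treating them as ground truth for “beauty.”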
Many sources, including experts speaking on the AI beauty contest in the Guardian article linked in the main text, describe the lack of representation of minorities in underlying datasets as a key issue in algorithmically measuring or creating beauty. This is a real issue. Though the FFHQ dataset described previously uses images posted to Flickr, other datasets are narrower in the subjects represented. For example, the CelebFaces Attributes (CelebA) dataset, created by researchers at The Chinese University of Hong Kong, is popular for a variety of data science tasks related to human faces. One paper associated with the creation and use of the dataset had been cited over 4,500 times at the time of writing, according to Google Scholar. The popular machine learning library TensorFlow includes CelebA in its data catalog.
As implied by its name, CelebFaces Attributes contains pictures of over 200,000 celebrities’ faces. Though the creators don’t appear to disclose how they chose which celebrities to include, the images appear to be of predominantly Western celebrities. This is supported by sources like the creators of the Diversity in Faces (DiF) dataset, who determined that about 86% of subjects pictured in CelebA had lighter skin tones, and about 78% of subjects were under the age of 46, which doesn’t exactly sound out of line with the demographics of Western celebrities. The authors behind DiF found similar levels of unbalanced age and skin tone representation in other popular face datasets.
We can also consider the SCUT-FBP5500 dataset, created specifically for algorithmically measuring facial beauty, which contains only pictures of Asian and Caucasian subjects. Each photo in SCUT-FBP5500 was given a beauty score between 1 and 5 by a team of volunteers aged between 18 and 27, which invites us to consider the bias introduced by those who decide what constitutes beauty in datasets labeled for those purposes.
A final dataset worth considering in this discussion is the HotOrNot dataset, in which the authors use a subset of photos and their ratings from the website HotOrNot.com. HotOrNot was a website that invited users to rate the hotness of user-submitted photos from 1 to 10. Mashable recounts its rise to fame and eventual downfall in this piece honoring the site’s 20th birthday. Anecdotally, as a former dumbass kid with the open-mindedness of an early 2000s Friends episode, I can attest that HotOrNot was mostly popular with dumbass kids with the open-mindedness of early 2000s Friends episodes. Though not as popular as the other datasets described, the HotOrNot dataset has been used in research on the measurement or adjustment of facial “beauty,” such as in this paper on a method for facial beauty prediction.
In reality, the researchers behind all of these datasets bear little of the blame for the lack of representation in their data or methods. It’s normal practice for researchers to scope their experiments so that they can test a specific thing. One intended use of CelebA, for example, was to develop and test a method for identifying a person’s face in a variety of different circumstances, so it makes sense that the authors opted for subjects whose pictures are easy to find. One stated aim of SCUT-FBP5500 is to provide data that can be used to study differences in the perception of beauty (specifically, the distribution of scores) between Asian and Caucasian subjects. Though not explicitly stated by its creators, the HotOrNot dataset was likely conceived as a way of obtaining a large number of images that had already been rated by a large number of people (the authors only used images with more than 100 ratings), as these kinds of rating tasks can be time-consuming and expensive, particularly at that scale.
Though there’s certainly more many researchers could do to address the lack of representation and other biases in their datasets, much of the folly in attempting to algorithmically measure or enhance beauty (beyond the question of why?) stems from the misuse of data produced for specific tests under specific circumstances. Or, in some cases, the issue may stem from the lazy creation of new datasets for general-purpose commercial products without due consideration of the biases introduced by minimal diversity in the subjects pictured, or by the preferences of a largely homogeneous group evaluating the images. We can see the results of these issues in the biases found in everything from AI beauty contests, to commercial product APIs, to satirical essays on corporate culture.
Thought Leadership Package
The Thought Leadership package uses OpenAI’s GPT-2 text generation model, trained on a dataset of 8 million web pages linked to from Reddit. I used Max Woolf’s excellent gpt-2-simple Python library to simplify the process.
To generate text specific to my selected thought influencers, I downloaded each thought leader’s 3,200 most recent Tweets (the limit imposed by the Twitter API), excluding replies and retweets. Because of this, and potentially because of a user’s lower total Tweet count, some users had fewer Tweets to train on than others. I then fine-tuned the GPT-2 model on Tweets from each specified combination of thought leaders. Due to the uneven distribution of the number of Tweets available for each user, some of the generated outputs resemble one author more than the other. We curated the Tweets to omit: entirely incoherent gibberish; Tweets mentioning real (non-mega-celebrity) people by name; anything too dark or political; and Tweets that contained too many identifiable fragments of text from the training data.
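The reply-and-retweet filtering can be sketched like this; the dictionary fields mirror a simplified version of the Twitter v1.1 API’s tweet JSON, and the sample tweets are invented:

```python
def filter_tweets(tweets):
    """Keep only original tweet text (no replies, no retweets)
    for use as fine-tuning data."""
    kept = []
    for tweet in tweets:
        if tweet.get("in_reply_to_status_id") is not None:
            continue  # reply to someone else's tweet
        if "retweeted_status" in tweet or tweet.get("text", "").startswith("RT @"):
            continue  # native or manual retweet
        kept.append(tweet["text"])
    return kept

sample = [
    {"text": "It's now more essential than ever that moms understand big data."},
    {"text": "RT @someone: kite surfing changed my life"},
    {"text": "Thanks!", "in_reply_to_status_id": 123},
]
corpus = filter_tweets(sample)
```

The surviving text, one tweet per line, is what gets fed to gpt-2-simple for fine-tuning.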
Like the rest of this article, my description of thought leadership is tongue-in-cheek. Furthermore, the selection of thought leaders was not based on any structured methodology; rather, they were nominated by friends or found through Google searches. Their inclusion was largely based on their influence in their respective fields and their strong social media presence.